00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 973 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3640 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.065 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.066 The recommended git tool is: git 00:00:00.066 using credential 00000000-0000-0000-0000-000000000002 00:00:00.068 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.100 Fetching changes from the remote Git repository 00:00:00.102 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.144 Using shallow fetch with depth 1 00:00:00.144 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.144 > git --version # timeout=10 00:00:00.185 > git --version # 'git version 2.39.2' 00:00:00.185 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.219 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.219 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.546 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.556 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.568 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.568 > git config core.sparsecheckout # timeout=10 00:00:06.579 > git read-tree -mu HEAD # timeout=10 00:00:06.596 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:06.613 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.613 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:06.721 [Pipeline] Start of Pipeline 00:00:06.735 [Pipeline] library 00:00:06.736 Loading library shm_lib@master 00:00:06.737 Library shm_lib@master is cached. Copying from home. 00:00:06.751 [Pipeline] node 00:00:06.762 Running on VM-host-SM4 in /var/jenkins/workspace/ubuntu22-vg-autotest 00:00:06.764 [Pipeline] { 00:00:06.773 [Pipeline] catchError 00:00:06.774 [Pipeline] { 00:00:06.783 [Pipeline] wrap 00:00:06.789 [Pipeline] { 00:00:06.794 [Pipeline] stage 00:00:06.796 [Pipeline] { (Prologue) 00:00:06.808 [Pipeline] echo 00:00:06.809 Node: VM-host-SM4 00:00:06.814 [Pipeline] cleanWs 00:00:06.822 [WS-CLEANUP] Deleting project workspace... 00:00:06.822 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.828 [WS-CLEANUP] done 00:00:07.024 [Pipeline] setCustomBuildProperty 00:00:07.165 [Pipeline] httpRequest 00:00:07.477 [Pipeline] echo 00:00:07.479 Sorcerer 10.211.164.20 is alive 00:00:07.489 [Pipeline] retry 00:00:07.491 [Pipeline] { 00:00:07.506 [Pipeline] httpRequest 00:00:07.510 HttpMethod: GET 00:00:07.511 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.511 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.531 Response Code: HTTP/1.1 200 OK 00:00:07.531 Success: Status code 200 is in the accepted range: 200,404 00:00:07.532 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:12.480 [Pipeline] } 00:00:12.496 [Pipeline] // retry 00:00:12.503 [Pipeline] sh 00:00:12.786 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:12.802 [Pipeline] httpRequest 00:00:13.442 [Pipeline] echo 00:00:13.445 Sorcerer 10.211.164.20 is alive 00:00:13.456 [Pipeline] retry 00:00:13.458 [Pipeline] { 00:00:13.472 [Pipeline] httpRequest 00:00:13.476 HttpMethod: GET 00:00:13.477 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:13.478 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:13.503 Response Code: HTTP/1.1 200 OK 00:00:13.503 Success: Status code 200 is in the accepted range: 200,404 00:00:13.504 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:27.610 [Pipeline] } 00:01:27.627 [Pipeline] // retry 00:01:27.634 [Pipeline] sh 00:01:27.911 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:30.452 [Pipeline] sh 00:01:30.735 + git -C spdk log --oneline -n5 00:01:30.735 c13c99a5e test: Various fixes for Fedora40 00:01:30.735 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:30.735 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:30.735 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:30.735 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:30.755 [Pipeline] withCredentials 00:01:30.765 > git --version # timeout=10 00:01:30.777 > git --version # 'git version 2.39.2' 00:01:30.794 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:30.798 [Pipeline] { 00:01:30.807 [Pipeline] retry 00:01:30.809 [Pipeline] { 00:01:30.824 [Pipeline] sh 00:01:31.111 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:31.383 [Pipeline] } 00:01:31.401 [Pipeline] // retry 00:01:31.407 [Pipeline] } 00:01:31.423 [Pipeline] // withCredentials 00:01:31.432 [Pipeline] httpRequest 00:01:31.942 [Pipeline] echo 00:01:31.944 Sorcerer 10.211.164.20 is alive 00:01:31.956 [Pipeline] retry 00:01:31.958 [Pipeline] { 00:01:31.972 [Pipeline] httpRequest 00:01:31.977 HttpMethod: GET 00:01:31.978 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:31.978 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:31.985 Response Code: HTTP/1.1 200 OK 00:01:31.986 Success: Status code 200 is in the accepted range: 200,404 00:01:31.987 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:36.456 [Pipeline] } 00:01:36.474 [Pipeline] // retry 
00:01:36.482 [Pipeline] sh 00:01:36.761 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:38.152 [Pipeline] sh 00:01:38.433 + git -C dpdk log --oneline -n5 00:01:38.433 caf0f5d395 version: 22.11.4 00:01:38.433 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:38.433 dc9c799c7d vhost: fix missing spinlock unlock 00:01:38.433 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:38.433 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:38.454 [Pipeline] writeFile 00:01:38.472 [Pipeline] sh 00:01:38.759 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:38.771 [Pipeline] sh 00:01:39.052 + cat autorun-spdk.conf 00:01:39.052 SPDK_TEST_UNITTEST=1 00:01:39.052 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.052 SPDK_TEST_NVME=1 00:01:39.052 SPDK_TEST_BLOCKDEV=1 00:01:39.052 SPDK_RUN_ASAN=1 00:01:39.052 SPDK_RUN_UBSAN=1 00:01:39.052 SPDK_TEST_RAID5=1 00:01:39.052 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:39.052 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:39.052 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.060 RUN_NIGHTLY=1 00:01:39.062 [Pipeline] } 00:01:39.075 [Pipeline] // stage 00:01:39.090 [Pipeline] stage 00:01:39.092 [Pipeline] { (Run VM) 00:01:39.105 [Pipeline] sh 00:01:39.387 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:39.387 + echo 'Start stage prepare_nvme.sh' 00:01:39.387 Start stage prepare_nvme.sh 00:01:39.387 + [[ -n 10 ]] 00:01:39.387 + disk_prefix=ex10 00:01:39.387 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]] 00:01:39.387 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]] 00:01:39.387 + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf 00:01:39.387 ++ SPDK_TEST_UNITTEST=1 00:01:39.387 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.387 ++ SPDK_TEST_NVME=1 00:01:39.387 ++ SPDK_TEST_BLOCKDEV=1 00:01:39.387 ++ SPDK_RUN_ASAN=1 00:01:39.387 ++ SPDK_RUN_UBSAN=1 00:01:39.387 ++ SPDK_TEST_RAID5=1 00:01:39.387 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:39.387 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:39.387 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.387 ++ RUN_NIGHTLY=1 00:01:39.387 + cd /var/jenkins/workspace/ubuntu22-vg-autotest 00:01:39.387 + nvme_files=() 00:01:39.387 + declare -A nvme_files 00:01:39.387 + backend_dir=/var/lib/libvirt/images/backends 00:01:39.387 + nvme_files['nvme.img']=5G 00:01:39.387 + nvme_files['nvme-cmb.img']=5G 00:01:39.387 + nvme_files['nvme-multi0.img']=4G 00:01:39.387 + nvme_files['nvme-multi1.img']=4G 00:01:39.387 + nvme_files['nvme-multi2.img']=4G 00:01:39.387 + nvme_files['nvme-openstack.img']=8G 00:01:39.387 + nvme_files['nvme-zns.img']=5G 00:01:39.387 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:39.387 + (( SPDK_TEST_FTL == 1 )) 00:01:39.387 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:39.387 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:39.387 + for nvme in "${!nvme_files[@]}" 00:01:39.387 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G 00:01:39.387 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:39.387 + for nvme in "${!nvme_files[@]}" 00:01:39.387 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G 00:01:39.387 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:39.387 + for nvme in "${!nvme_files[@]}" 00:01:39.387 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G 00:01:39.646 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:39.646 + for nvme in "${!nvme_files[@]}" 00:01:39.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G 00:01:39.646 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:39.646 + for nvme in "${!nvme_files[@]}" 00:01:39.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G 00:01:39.646 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:39.905 + for nvme in "${!nvme_files[@]}" 00:01:39.905 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G 00:01:39.905 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:39.905 + for nvme in "${!nvme_files[@]}" 00:01:39.905 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:01:39.905 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:39.905 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:01:40.165 + echo 'End stage prepare_nvme.sh' 00:01:40.165 End stage prepare_nvme.sh 00:01:40.178 [Pipeline] sh 00:01:40.470 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:40.470 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme.img -H -a -v -f ubuntu2204 00:01:40.470 00:01:40.470 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant 00:01:40.470 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk 00:01:40.470 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest 00:01:40.470 HELP=0 00:01:40.470 DRY_RUN=0 00:01:40.470 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme.img, 00:01:40.470 NVME_DISKS_TYPE=nvme, 00:01:40.470 NVME_AUTO_CREATE=0 00:01:40.470 NVME_DISKS_NAMESPACES=, 00:01:40.470 NVME_CMB=, 00:01:40.470 NVME_PMR=, 00:01:40.470 NVME_ZNS=, 00:01:40.470 NVME_MS=, 00:01:40.470 NVME_FDP=, 00:01:40.470 SPDK_VAGRANT_DISTRO=ubuntu2204 00:01:40.470 SPDK_VAGRANT_VMCPU=10 00:01:40.470 SPDK_VAGRANT_VMRAM=12288 00:01:40.470 SPDK_VAGRANT_PROVIDER=libvirt 00:01:40.470 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:40.470 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:40.470 
SPDK_OPENSTACK_NETWORK=0 00:01:40.470 VAGRANT_PACKAGE_BOX=0 00:01:40.470 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:40.470 FORCE_DISTRO=true 00:01:40.470 VAGRANT_BOX_VERSION= 00:01:40.470 EXTRA_VAGRANTFILES= 00:01:40.470 NIC_MODEL=e1000 00:01:40.470 00:01:40.470 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt' 00:01:40.470 /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest 00:01:43.007 Bringing machine 'default' up with 'libvirt' provider... 00:01:43.575 ==> default: Creating image (snapshot of base box volume). 00:01:43.575 ==> default: Creating domain with the following settings... 00:01:43.575 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1731890657_705a0fc36cf1b93f1c8f 00:01:43.575 ==> default: -- Domain type: kvm 00:01:43.575 ==> default: -- Cpus: 10 00:01:43.575 ==> default: -- Feature: acpi 00:01:43.575 ==> default: -- Feature: apic 00:01:43.575 ==> default: -- Feature: pae 00:01:43.575 ==> default: -- Memory: 12288M 00:01:43.575 ==> default: -- Memory Backing: hugepages: 00:01:43.575 ==> default: -- Management MAC: 00:01:43.575 ==> default: -- Loader: 00:01:43.575 ==> default: -- Nvram: 00:01:43.575 ==> default: -- Base box: spdk/ubuntu2204 00:01:43.575 ==> default: -- Storage pool: default 00:01:43.575 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1731890657_705a0fc36cf1b93f1c8f.img (20G) 00:01:43.575 ==> default: -- Volume Cache: default 00:01:43.575 ==> default: -- Kernel: 00:01:43.575 ==> default: -- Initrd: 00:01:43.575 ==> default: -- Graphics Type: vnc 00:01:43.575 ==> default: -- Graphics Port: -1 00:01:43.575 ==> default: -- Graphics IP: 127.0.0.1 00:01:43.575 ==> default: -- Graphics Password: Not defined 00:01:43.575 ==> default: -- Video Type: cirrus 00:01:43.575 ==> default: -- Video VRAM: 9216 00:01:43.575 ==> default: -- Sound Type: 00:01:43.575 ==> default: -- Keymap: en-us 00:01:43.575 ==> default: -- TPM Path: 00:01:43.575 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:43.575 ==> default: -- Command line args: 00:01:43.575 ==> default: -> value=-device, 00:01:43.576 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:43.576 ==> default: -> value=-drive, 00:01:43.576 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-0-drive0, 00:01:43.576 ==> default: -> value=-device, 00:01:43.576 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.834 ==> default: Creating shared folders metadata... 00:01:43.834 ==> default: Starting domain. 00:01:45.740 ==> default: Waiting for domain to get an IP address... 00:02:03.839 ==> default: Waiting for SSH to become available... 00:02:05.744 ==> default: Configuring and enabling network interfaces... 00:02:11.016 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:16.307 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:21.576 ==> default: Mounting SSHFS shared folder... 00:02:21.835 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:02:21.835 ==> default: Checking Mount.. 
00:02:22.775 ==> default: Folder Successfully Mounted! 00:02:22.775 ==> default: Running provisioner: file... 00:02:23.035 default: ~/.gitconfig => .gitconfig 00:02:23.605 00:02:23.605 SUCCESS! 00:02:23.605 00:02:23.605 cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:02:23.605 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:23.605 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm. 00:02:23.605 00:02:23.615 [Pipeline] } 00:02:23.632 [Pipeline] // stage 00:02:23.643 [Pipeline] dir 00:02:23.644 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt 00:02:23.646 [Pipeline] { 00:02:23.659 [Pipeline] catchError 00:02:23.661 [Pipeline] { 00:02:23.678 [Pipeline] sh 00:02:23.965 + vagrant ssh-config --host vagrant 00:02:23.966 + sed -ne /^Host/,$p 00:02:23.966 + tee ssh_conf 00:02:27.265 Host vagrant 00:02:27.265 HostName 192.168.121.125 00:02:27.265 User vagrant 00:02:27.265 Port 22 00:02:27.265 UserKnownHostsFile /dev/null 00:02:27.265 StrictHostKeyChecking no 00:02:27.265 PasswordAuthentication no 00:02:27.265 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:02:27.265 IdentitiesOnly yes 00:02:27.265 LogLevel FATAL 00:02:27.265 ForwardAgent yes 00:02:27.265 ForwardX11 yes 00:02:27.265 00:02:27.278 [Pipeline] withEnv 00:02:27.281 [Pipeline] { 00:02:27.295 [Pipeline] sh 00:02:27.578 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:27.578 source /etc/os-release 00:02:27.578 [[ -e /image.version ]] && img=$(< /image.version) 00:02:27.578 # Minimal, systemd-like check. 00:02:27.578 if [[ -e /.dockerenv ]]; then 00:02:27.578 # Clear garbage from the node's name: 00:02:27.578 # agt-er_autotest_547-896 -> autotest_547-896 00:02:27.578 # $HOSTNAME is the actual container id 00:02:27.578 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:27.578 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:27.578 # We can assume this is a mount from a host where container is running, 00:02:27.578 # so fetch its hostname to easily identify the target swarm worker. 
00:02:27.578 container="$(< /etc/hostname) ($agent)" 00:02:27.578 else 00:02:27.578 # Fallback 00:02:27.578 container=$agent 00:02:27.578 fi 00:02:27.578 fi 00:02:27.578 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:27.578 00:02:27.846 [Pipeline] } 00:02:27.856 [Pipeline] // withEnv 00:02:27.863 [Pipeline] setCustomBuildProperty 00:02:27.875 [Pipeline] stage 00:02:27.877 [Pipeline] { (Tests) 00:02:27.892 [Pipeline] sh 00:02:28.165 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:28.437 [Pipeline] sh 00:02:28.718 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:28.993 [Pipeline] timeout 00:02:28.993 Timeout set to expire in 1 hr 30 min 00:02:28.995 [Pipeline] { 00:02:29.011 [Pipeline] sh 00:02:29.294 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:29.861 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:29.874 [Pipeline] sh 00:02:30.154 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:30.427 [Pipeline] sh 00:02:30.708 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:30.983 [Pipeline] sh 00:02:31.353 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo 00:02:31.612 ++ readlink -f spdk_repo 00:02:31.612 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:31.612 + [[ -n /home/vagrant/spdk_repo ]] 00:02:31.612 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:31.612 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:31.612 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:31.612 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:31.612 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:31.612 + [[ ubuntu22-vg-autotest == pkgdep-* ]] 00:02:31.612 + cd /home/vagrant/spdk_repo 00:02:31.612 + source /etc/os-release 00:02:31.612 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:02:31.612 ++ NAME=Ubuntu 00:02:31.612 ++ VERSION_ID=22.04 00:02:31.612 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:02:31.612 ++ VERSION_CODENAME=jammy 00:02:31.612 ++ ID=ubuntu 00:02:31.612 ++ ID_LIKE=debian 00:02:31.612 ++ HOME_URL=https://www.ubuntu.com/ 00:02:31.612 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:02:31.612 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:02:31.612 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:02:31.612 ++ UBUNTU_CODENAME=jammy 00:02:31.612 + uname -a 00:02:31.612 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:02:31.612 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:31.871 Hugepages 00:02:31.871 node hugesize free / total 00:02:31.871 node0 1048576kB 0 / 0 00:02:31.871 node0 2048kB 0 / 0 00:02:31.871 00:02:31.871 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:31.871 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:31.871 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:31.871 + rm -f /tmp/spdk-ld-path 00:02:31.871 + source autorun-spdk.conf 00:02:31.871 ++ SPDK_TEST_UNITTEST=1 00:02:31.871 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:31.871 ++ SPDK_TEST_NVME=1 00:02:31.871 ++ SPDK_TEST_BLOCKDEV=1 00:02:31.871 ++ SPDK_RUN_ASAN=1 00:02:31.871 ++ SPDK_RUN_UBSAN=1 00:02:31.871 ++ SPDK_TEST_RAID5=1 00:02:31.871 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:31.871 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:31.871 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:31.871 ++ RUN_NIGHTLY=1 00:02:31.871 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:31.871 + [[ -n '' ]] 00:02:31.871 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:31.871 + for M in /var/spdk/build-*-manifest.txt 00:02:31.871 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:31.871 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:31.871 + for M in /var/spdk/build-*-manifest.txt 00:02:31.871 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:31.871 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:31.871 ++ uname 00:02:31.871 + [[ Linux == \L\i\n\u\x ]] 00:02:31.871 + sudo dmesg -T 00:02:31.871 + sudo dmesg --clear 00:02:31.871 + dmesg_pid=2805 00:02:31.871 + sudo dmesg -Tw 00:02:31.871 + [[ Ubuntu == FreeBSD ]] 00:02:31.871 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:31.871 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:31.871 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:31.871 + [[ -x /usr/src/fio-static/fio ]] 00:02:31.871 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:31.871 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:31.871 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:31.871 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:31.871 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:31.871 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:31.871 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:31.871 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:31.871 Test configuration: 00:02:31.871 SPDK_TEST_UNITTEST=1 00:02:31.871 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:31.871 SPDK_TEST_NVME=1 00:02:31.871 SPDK_TEST_BLOCKDEV=1 00:02:31.871 SPDK_RUN_ASAN=1 00:02:31.871 SPDK_RUN_UBSAN=1 00:02:31.871 SPDK_TEST_RAID5=1 00:02:31.871 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:31.871 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:31.871 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:32.131 RUN_NIGHTLY=1 00:45:06 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:32.131 00:45:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:32.131 00:45:06 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:32.131 00:45:06 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:32.131 00:45:06 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:32.131 00:45:06 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:32.131 00:45:06 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:32.131 00:45:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:32.131 00:45:06 -- paths/export.sh@5 -- $ export PATH 00:02:32.131 00:45:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:32.131 00:45:06 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:32.131 00:45:06 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:32.131 00:45:06 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731890706.XXXXXX 00:02:32.131 00:45:06 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731890706.fnb5HB 00:02:32.131 00:45:06 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:32.131 00:45:06 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:02:32.131 00:45:06 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:32.131 00:45:06 -- common/autobuild_common.sh@447 -- 
$ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:32.131 00:45:06 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:32.131 00:45:06 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:32.131 00:45:06 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:32.131 00:45:06 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:32.131 00:45:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.131 00:45:06 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:32.131 00:45:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:32.131 00:45:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:32.131 00:45:06 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:32.131 00:45:06 -- spdk/autobuild.sh@16 -- $ date -u 00:02:32.131 Mon Nov 18 00:45:06 UTC 2024 00:02:32.131 00:45:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:32.131 LTS-67-gc13c99a5e 00:02:32.131 00:45:06 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:32.131 00:45:06 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:32.131 00:45:06 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:32.131 00:45:06 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:32.131 00:45:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.131 ************************************ 00:02:32.131 START TEST asan 00:02:32.131 ************************************ 00:02:32.131 using asan 00:02:32.131 00:45:06 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:02:32.131 00:02:32.131 real 0m0.000s 00:02:32.131 user 0m0.000s 00:02:32.131 sys 0m0.000s 00:02:32.131 00:45:06 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:32.131 ************************************ 00:02:32.131 END TEST asan 00:02:32.131 ************************************ 00:02:32.131 00:45:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.131 00:45:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:32.131 00:45:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:32.131 00:45:06 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:32.131 00:45:06 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:32.131 00:45:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.131 ************************************ 00:02:32.131 START TEST ubsan 00:02:32.131 ************************************ 00:02:32.131 using ubsan 00:02:32.131 00:45:06 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:32.131 00:02:32.131 real 0m0.000s 00:02:32.131 user 0m0.000s 00:02:32.131 sys 0m0.000s 00:02:32.131 00:45:06 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:32.131 ************************************ 00:02:32.131 END TEST ubsan 00:02:32.131 00:45:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.131 ************************************ 00:02:32.131 00:45:06 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:32.131 00:45:06 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:32.131 00:45:06 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk 
_build_native_dpdk 00:02:32.131 00:45:06 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:32.131 00:45:06 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:32.131 00:45:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.131 ************************************ 00:02:32.131 START TEST build_native_dpdk 00:02:32.131 ************************************ 00:02:32.131 00:45:06 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:32.131 00:45:06 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:32.131 00:45:06 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:32.131 00:45:06 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:32.131 00:45:06 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:32.131 00:45:06 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:32.131 00:45:06 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:32.131 00:45:06 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:32.131 00:45:06 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:32.131 00:45:06 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:32.131 00:45:06 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:32.131 00:45:06 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:32.131 00:45:06 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:32.390 00:45:06 -- common/autobuild_common.sh@68 -- $ compiler_version=11 00:02:32.390 00:45:06 -- common/autobuild_common.sh@69 -- $ compiler_version=11 00:02:32.390 00:45:06 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:32.390 00:45:06 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:32.390 00:45:06 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:32.390 00:45:06 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:32.390 00:45:06 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:32.390 00:45:06 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:32.390 caf0f5d395 version: 22.11.4 00:02:32.390 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:32.390 dc9c799c7d vhost: fix missing spinlock unlock 00:02:32.390 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:32.390 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:32.390 00:45:06 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:32.390 00:45:06 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:32.390 00:45:06 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:32.390 00:45:06 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:32.390 00:45:06 -- common/autobuild_common.sh@89 -- $ [[ 11 -ge 5 ]] 00:02:32.390 00:45:06 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:32.390 00:45:06 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:32.391 00:45:06 -- common/autobuild_common.sh@93 -- $ [[ 11 -ge 10 ]] 00:02:32.391 00:45:06 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:32.391 00:45:06 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:32.391 00:45:06 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:32.391 00:45:06 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:32.391 00:45:06 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:32.391 00:45:06 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:32.391 00:45:06 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:32.391 00:45:06 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:32.391 00:45:06 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:32.391 00:45:06 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:32.391 00:45:06 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:32.391 00:45:06 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:32.391 00:45:06 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:32.391 00:45:06 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:32.391 00:45:06 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:32.391 00:45:06 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:32.391 00:45:06 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:32.391 00:45:06 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:32.391 00:45:06 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:32.391 00:45:06 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:32.391 00:45:06 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:32.391 00:45:06 -- scripts/common.sh@343 -- $ case "$op" in 00:02:32.391 00:45:06 -- scripts/common.sh@344 -- $ : 1 00:02:32.391 00:45:06 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:32.391 00:45:06 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:32.391 00:45:06 -- scripts/common.sh@364 -- $ decimal 22 00:02:32.391 00:45:06 -- scripts/common.sh@352 -- $ local d=22 00:02:32.391 00:45:06 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:32.391 00:45:06 -- scripts/common.sh@354 -- $ echo 22 00:02:32.391 00:45:06 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:32.391 00:45:06 -- scripts/common.sh@365 -- $ decimal 21 00:02:32.391 00:45:06 -- scripts/common.sh@352 -- $ local d=21 00:02:32.391 00:45:06 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:32.391 00:45:06 -- scripts/common.sh@354 -- $ echo 21 00:02:32.391 00:45:06 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:32.391 00:45:06 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:32.391 00:45:06 -- scripts/common.sh@366 -- $ return 1 00:02:32.391 00:45:06 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:32.391 patching file config/rte_config.h 00:02:32.391 Hunk #1 succeeded at 60 (offset 1 line). 00:02:32.391 00:45:06 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:32.391 00:45:06 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:32.391 00:45:06 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:32.391 00:45:06 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:32.391 00:45:06 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:32.391 00:45:06 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:32.391 00:45:06 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:32.391 00:45:06 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:32.391 00:45:06 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:32.391 00:45:06 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:32.391 00:45:06 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:32.391 00:45:06 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:32.391 00:45:06 -- scripts/common.sh@343 -- $ case "$op" in 00:02:32.391 00:45:06 -- scripts/common.sh@344 -- $ : 1 00:02:32.391 00:45:06 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:32.391 00:45:06 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:32.391 00:45:06 -- scripts/common.sh@364 -- $ decimal 22 00:02:32.391 00:45:06 -- scripts/common.sh@352 -- $ local d=22 00:02:32.391 00:45:06 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:32.391 00:45:06 -- scripts/common.sh@354 -- $ echo 22 00:02:32.391 00:45:06 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:32.391 00:45:06 -- scripts/common.sh@365 -- $ decimal 24 00:02:32.391 00:45:06 -- scripts/common.sh@352 -- $ local d=24 00:02:32.391 00:45:06 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:32.391 00:45:06 -- scripts/common.sh@354 -- $ echo 24 00:02:32.391 00:45:06 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:32.391 00:45:06 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:32.391 00:45:06 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:32.391 00:45:06 -- scripts/common.sh@367 -- $ return 0 00:02:32.391 00:45:06 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:32.391 patching file lib/pcapng/rte_pcapng.c 00:02:32.391 Hunk #1 succeeded at 110 (offset -18 lines). 
00:02:32.391 00:45:06 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:32.391 00:45:06 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:32.391 00:45:06 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:32.391 00:45:06 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:32.391 00:45:06 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:37.668 The Meson build system 00:02:37.668 Version: 1.4.0 00:02:37.668 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:37.668 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:37.668 Build type: native build 00:02:37.668 Program cat found: YES (/usr/bin/cat) 00:02:37.668 Project name: DPDK 00:02:37.668 Project version: 22.11.4 00:02:37.668 C compiler for the host machine: gcc (gcc 11.4.0 "gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:02:37.668 C linker for the host machine: gcc ld.bfd 2.38 00:02:37.668 Host machine cpu family: x86_64 00:02:37.668 Host machine cpu: x86_64 00:02:37.668 Message: ## Building in Developer Mode ## 00:02:37.668 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:37.668 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:37.668 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:37.668 Program objdump found: YES (/usr/bin/objdump) 00:02:37.668 Program python3 found: YES (/usr/bin/python3) 00:02:37.668 Program cat found: YES (/usr/bin/cat) 00:02:37.668 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:37.668 Checking for size of "void *" : 8 00:02:37.668 Checking for size of "void *" : 8 (cached) 00:02:37.668 Library m found: YES 00:02:37.668 Library numa found: YES 00:02:37.668 Has header "numaif.h" : YES 00:02:37.668 Library fdt found: NO 00:02:37.668 Library execinfo found: NO 00:02:37.668 Has header "execinfo.h" : YES 00:02:37.668 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:02:37.668 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:37.668 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:37.668 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:37.668 Run-time dependency openssl found: YES 3.0.2 00:02:37.668 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:37.668 Library pcap found: NO 00:02:37.668 Compiler for C supports arguments -Wcast-qual: YES 00:02:37.668 Compiler for C supports arguments -Wdeprecated: YES 00:02:37.668 Compiler for C supports arguments -Wformat: YES 00:02:37.668 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:37.668 Compiler for C supports arguments -Wformat-security: YES 00:02:37.668 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:37.668 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:37.668 Compiler for C supports arguments -Wnested-externs: YES 00:02:37.668 Compiler for C supports arguments -Wold-style-definition: YES 00:02:37.668 Compiler for C supports arguments -Wpointer-arith: YES 00:02:37.668 Compiler for C supports arguments -Wsign-compare: YES 00:02:37.668 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:37.668 Compiler for C supports arguments -Wundef: YES 00:02:37.668 Compiler for C supports arguments -Wwrite-strings: YES 00:02:37.668 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:37.668 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:37.668 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:37.668 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:37.668 Compiler for C supports arguments -mavx512f: YES 00:02:37.668 Checking if "AVX512 checking" compiles: YES 00:02:37.668 Fetching value of define "__SSE4_2__" : 1 00:02:37.668 Fetching value of define "__AES__" : 1 00:02:37.668 Fetching value of define "__AVX__" : 1 00:02:37.668 Fetching value of define "__AVX2__" : 1 00:02:37.668 Fetching value of define "__AVX512BW__" : 1 00:02:37.668 Fetching value of define "__AVX512CD__" : 1 00:02:37.668 Fetching value of define "__AVX512DQ__" : 1 00:02:37.668 Fetching value of define "__AVX512F__" : 1 00:02:37.668 Fetching value of define "__AVX512VL__" : 1 00:02:37.668 Fetching value of define "__PCLMUL__" : 1 00:02:37.668 Fetching value of define "__RDRND__" : 1 00:02:37.668 Fetching value of define "__RDSEED__" : 1 00:02:37.668 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:37.668 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:37.668 Message: lib/kvargs: Defining dependency "kvargs" 00:02:37.668 Message: lib/telemetry: Defining dependency "telemetry" 00:02:37.668 Checking for function "getentropy" : YES 00:02:37.668 Message: lib/eal: Defining dependency "eal" 00:02:37.668 Message: lib/ring: Defining dependency "ring" 00:02:37.668 Message: lib/rcu: Defining dependency "rcu" 00:02:37.668 Message: lib/mempool: Defining dependency "mempool" 00:02:37.668 Message: lib/mbuf: Defining dependency "mbuf" 00:02:37.668 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:37.668 Fetching value of define 
"__AVX512F__" : 1 (cached) 00:02:37.668 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.668 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.668 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:37.668 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:37.668 Compiler for C supports arguments -mpclmul: YES 00:02:37.668 Compiler for C supports arguments -maes: YES 00:02:37.668 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:37.668 Compiler for C supports arguments -mavx512bw: YES 00:02:37.668 Compiler for C supports arguments -mavx512dq: YES 00:02:37.668 Compiler for C supports arguments -mavx512vl: YES 00:02:37.668 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:37.668 Compiler for C supports arguments -mavx2: YES 00:02:37.668 Compiler for C supports arguments -mavx: YES 00:02:37.668 Message: lib/net: Defining dependency "net" 00:02:37.668 Message: lib/meter: Defining dependency "meter" 00:02:37.668 Message: lib/ethdev: Defining dependency "ethdev" 00:02:37.668 Message: lib/pci: Defining dependency "pci" 00:02:37.668 Message: lib/cmdline: Defining dependency "cmdline" 00:02:37.669 Message: lib/metrics: Defining dependency "metrics" 00:02:37.669 Message: lib/hash: Defining dependency "hash" 00:02:37.669 Message: lib/timer: Defining dependency "timer" 00:02:37.669 Fetching value of define "__AVX2__" : 1 (cached) 00:02:37.669 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.669 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:37.669 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:37.669 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.669 Message: lib/acl: Defining dependency "acl" 00:02:37.669 Message: lib/bbdev: Defining dependency "bbdev" 00:02:37.669 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:37.669 Run-time dependency libelf found: YES 0.186 00:02:37.669 lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled 00:02:37.669 Message: lib/bpf: Defining dependency "bpf" 00:02:37.669 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:37.669 Message: lib/compressdev: Defining dependency "compressdev" 00:02:37.669 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:37.669 Message: lib/distributor: Defining dependency "distributor" 00:02:37.669 Message: lib/efd: Defining dependency "efd" 00:02:37.669 Message: lib/eventdev: Defining dependency "eventdev" 00:02:37.669 Message: lib/gpudev: Defining dependency "gpudev" 00:02:37.669 Message: lib/gro: Defining dependency "gro" 00:02:37.669 Message: lib/gso: Defining dependency "gso" 00:02:37.669 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:37.669 Message: lib/jobstats: Defining dependency "jobstats" 00:02:37.669 Message: lib/latencystats: Defining dependency "latencystats" 00:02:37.669 Message: lib/lpm: Defining dependency "lpm" 00:02:37.669 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.669 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.669 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:37.669 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:37.669 Message: lib/member: Defining dependency "member" 00:02:37.669 Message: lib/pcapng: Defining dependency "pcapng" 00:02:37.669 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:37.669 Message: lib/power: Defining dependency "power" 00:02:37.669 Message: lib/rawdev: Defining dependency "rawdev" 00:02:37.669 
Message: lib/regexdev: Defining dependency "regexdev" 00:02:37.669 Message: lib/dmadev: Defining dependency "dmadev" 00:02:37.669 Message: lib/rib: Defining dependency "rib" 00:02:37.669 Message: lib/reorder: Defining dependency "reorder" 00:02:37.669 Message: lib/sched: Defining dependency "sched" 00:02:37.669 Message: lib/security: Defining dependency "security" 00:02:37.669 Message: lib/stack: Defining dependency "stack" 00:02:37.669 Has header "linux/userfaultfd.h" : YES 00:02:37.669 Message: lib/vhost: Defining dependency "vhost" 00:02:37.669 Message: lib/ipsec: Defining dependency "ipsec" 00:02:37.669 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.669 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.669 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.669 Message: lib/fib: Defining dependency "fib" 00:02:37.669 Message: lib/port: Defining dependency "port" 00:02:37.669 Message: lib/pdump: Defining dependency "pdump" 00:02:37.669 Message: lib/table: Defining dependency "table" 00:02:37.669 Message: lib/pipeline: Defining dependency "pipeline" 00:02:37.669 Message: lib/graph: Defining dependency "graph" 00:02:37.669 Message: lib/node: Defining dependency "node" 00:02:37.669 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:37.669 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:37.669 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:37.669 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:37.669 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:37.669 Compiler for C supports arguments -Wno-unused-value: YES 00:02:37.669 Compiler for C supports arguments -Wno-format: YES 00:02:37.669 Compiler for C supports arguments -Wno-format-security: YES 00:02:37.669 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:39.050 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:39.050 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:39.050 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:39.050 Fetching value of define "__AVX2__" : 1 (cached) 00:02:39.050 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:39.050 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:39.050 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:39.050 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:39.050 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:39.050 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:39.050 Program doxygen found: YES (/usr/bin/doxygen) 00:02:39.050 Configuring doxy-api.conf using configuration 00:02:39.050 Program sphinx-build found: NO 00:02:39.050 Configuring rte_build_config.h using configuration 00:02:39.050 Message: 00:02:39.050 ================= 00:02:39.050 Applications Enabled 00:02:39.050 ================= 00:02:39.050 00:02:39.050 apps: 00:02:39.050 pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, test-eventdev, 00:02:39.050 test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, test-security-perf, 00:02:39.050 00:02:39.050 00:02:39.050 Message: 00:02:39.050 ================= 00:02:39.050 Libraries Enabled 00:02:39.050 ================= 00:02:39.050 00:02:39.050 libs: 00:02:39.050 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:39.050 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:39.050 bbdev, bitratestats, bpf, 
cfgfile, compressdev, cryptodev, distributor, efd, 00:02:39.050 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:39.050 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:39.050 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:39.050 table, pipeline, graph, node, 00:02:39.050 00:02:39.050 Message: 00:02:39.050 =============== 00:02:39.050 Drivers Enabled 00:02:39.050 =============== 00:02:39.050 00:02:39.050 common: 00:02:39.050 00:02:39.050 bus: 00:02:39.050 pci, vdev, 00:02:39.050 mempool: 00:02:39.050 ring, 00:02:39.050 dma: 00:02:39.050 00:02:39.050 net: 00:02:39.050 i40e, 00:02:39.050 raw: 00:02:39.050 00:02:39.050 crypto: 00:02:39.050 00:02:39.050 compress: 00:02:39.050 00:02:39.050 regex: 00:02:39.050 00:02:39.050 vdpa: 00:02:39.050 00:02:39.050 event: 00:02:39.050 00:02:39.050 baseband: 00:02:39.050 00:02:39.050 gpu: 00:02:39.050 00:02:39.050 00:02:39.050 Message: 00:02:39.050 ================= 00:02:39.050 Content Skipped 00:02:39.050 ================= 00:02:39.050 00:02:39.050 apps: 00:02:39.050 dumpcap: missing dependency, "libpcap" 00:02:39.050 00:02:39.050 libs: 00:02:39.050 kni: explicitly disabled via build config (deprecated lib) 00:02:39.050 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:39.050 00:02:39.050 drivers: 00:02:39.050 common/cpt: not in enabled drivers build config 00:02:39.050 common/dpaax: not in enabled drivers build config 00:02:39.050 common/iavf: not in enabled drivers build config 00:02:39.050 common/idpf: not in enabled drivers build config 00:02:39.050 common/mvep: not in enabled drivers build config 00:02:39.050 common/octeontx: not in enabled drivers build config 00:02:39.050 bus/auxiliary: not in enabled drivers build config 00:02:39.050 bus/dpaa: not in enabled drivers build config 00:02:39.050 bus/fslmc: not in enabled drivers build config 00:02:39.050 bus/ifpga: not in enabled drivers build config 00:02:39.050 bus/vmbus: not in enabled drivers build config 00:02:39.050 common/cnxk: not in enabled drivers build config 00:02:39.050 common/mlx5: not in enabled drivers build config 00:02:39.050 common/qat: not in enabled drivers build config 00:02:39.050 common/sfc_efx: not in enabled drivers build config 00:02:39.050 mempool/bucket: not in enabled drivers build config 00:02:39.050 mempool/cnxk: not in enabled drivers build config 00:02:39.050 mempool/dpaa: not in enabled drivers build config 00:02:39.050 mempool/dpaa2: not in enabled drivers build config 00:02:39.050 mempool/octeontx: not in enabled drivers build config 00:02:39.050 mempool/stack: not in enabled drivers build config 00:02:39.050 dma/cnxk: not in enabled drivers build config 00:02:39.050 dma/dpaa: not in enabled drivers build config 00:02:39.050 dma/dpaa2: not in enabled drivers build config 00:02:39.050 dma/hisilicon: not in enabled drivers build config 00:02:39.050 dma/idxd: not in enabled drivers build config 00:02:39.050 dma/ioat: not in enabled drivers build config 00:02:39.050 dma/skeleton: not in enabled drivers build config 00:02:39.050 net/af_packet: not in enabled drivers build config 00:02:39.050 net/af_xdp: not in enabled drivers build config 00:02:39.050 net/ark: not in enabled drivers build config 00:02:39.050 net/atlantic: not in enabled drivers build config 00:02:39.050 net/avp: not in enabled drivers build config 00:02:39.050 net/axgbe: not in enabled drivers build config 00:02:39.050 net/bnx2x: not in enabled drivers build config 00:02:39.050 net/bnxt: not in enabled drivers build 
config 00:02:39.050 net/bonding: not in enabled drivers build config 00:02:39.050 net/cnxk: not in enabled drivers build config 00:02:39.050 net/cxgbe: not in enabled drivers build config 00:02:39.050 net/dpaa: not in enabled drivers build config 00:02:39.050 net/dpaa2: not in enabled drivers build config 00:02:39.050 net/e1000: not in enabled drivers build config 00:02:39.050 net/ena: not in enabled drivers build config 00:02:39.050 net/enetc: not in enabled drivers build config 00:02:39.050 net/enetfec: not in enabled drivers build config 00:02:39.050 net/enic: not in enabled drivers build config 00:02:39.050 net/failsafe: not in enabled drivers build config 00:02:39.050 net/fm10k: not in enabled drivers build config 00:02:39.050 net/gve: not in enabled drivers build config 00:02:39.050 net/hinic: not in enabled drivers build config 00:02:39.050 net/hns3: not in enabled drivers build config 00:02:39.050 net/iavf: not in enabled drivers build config 00:02:39.050 net/ice: not in enabled drivers build config 00:02:39.050 net/idpf: not in enabled drivers build config 00:02:39.050 net/igc: not in enabled drivers build config 00:02:39.050 net/ionic: not in enabled drivers build config 00:02:39.050 net/ipn3ke: not in enabled drivers build config 00:02:39.050 net/ixgbe: not in enabled drivers build config 00:02:39.050 net/kni: not in enabled drivers build config 00:02:39.050 net/liquidio: not in enabled drivers build config 00:02:39.050 net/mana: not in enabled drivers build config 00:02:39.050 net/memif: not in enabled drivers build config 00:02:39.050 net/mlx4: not in enabled drivers build config 00:02:39.050 net/mlx5: not in enabled drivers build config 00:02:39.050 net/mvneta: not in enabled drivers build config 00:02:39.050 net/mvpp2: not in enabled drivers build config 00:02:39.050 net/netvsc: not in enabled drivers build config 00:02:39.050 net/nfb: not in enabled drivers build config 00:02:39.050 net/nfp: not in enabled drivers build config 00:02:39.050 net/ngbe: not in enabled drivers build config 00:02:39.050 net/null: not in enabled drivers build config 00:02:39.051 net/octeontx: not in enabled drivers build config 00:02:39.051 net/octeon_ep: not in enabled drivers build config 00:02:39.051 net/pcap: not in enabled drivers build config 00:02:39.051 net/pfe: not in enabled drivers build config 00:02:39.051 net/qede: not in enabled drivers build config 00:02:39.051 net/ring: not in enabled drivers build config 00:02:39.051 net/sfc: not in enabled drivers build config 00:02:39.051 net/softnic: not in enabled drivers build config 00:02:39.051 net/tap: not in enabled drivers build config 00:02:39.051 net/thunderx: not in enabled drivers build config 00:02:39.051 net/txgbe: not in enabled drivers build config 00:02:39.051 net/vdev_netvsc: not in enabled drivers build config 00:02:39.051 net/vhost: not in enabled drivers build config 00:02:39.051 net/virtio: not in enabled drivers build config 00:02:39.051 net/vmxnet3: not in enabled drivers build config 00:02:39.051 raw/cnxk_bphy: not in enabled drivers build config 00:02:39.051 raw/cnxk_gpio: not in enabled drivers build config 00:02:39.051 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:39.051 raw/ifpga: not in enabled drivers build config 00:02:39.051 raw/ntb: not in enabled drivers build config 00:02:39.051 raw/skeleton: not in enabled drivers build config 00:02:39.051 crypto/armv8: not in enabled drivers build config 00:02:39.051 crypto/bcmfs: not in enabled drivers build config 00:02:39.051 crypto/caam_jr: not in enabled 
drivers build config 00:02:39.051 crypto/ccp: not in enabled drivers build config 00:02:39.051 crypto/cnxk: not in enabled drivers build config 00:02:39.051 crypto/dpaa_sec: not in enabled drivers build config 00:02:39.051 crypto/dpaa2_sec: not in enabled drivers build config 00:02:39.051 crypto/ipsec_mb: not in enabled drivers build config 00:02:39.051 crypto/mlx5: not in enabled drivers build config 00:02:39.051 crypto/mvsam: not in enabled drivers build config 00:02:39.051 crypto/nitrox: not in enabled drivers build config 00:02:39.051 crypto/null: not in enabled drivers build config 00:02:39.051 crypto/octeontx: not in enabled drivers build config 00:02:39.051 crypto/openssl: not in enabled drivers build config 00:02:39.051 crypto/scheduler: not in enabled drivers build config 00:02:39.051 crypto/uadk: not in enabled drivers build config 00:02:39.051 crypto/virtio: not in enabled drivers build config 00:02:39.051 compress/isal: not in enabled drivers build config 00:02:39.051 compress/mlx5: not in enabled drivers build config 00:02:39.051 compress/octeontx: not in enabled drivers build config 00:02:39.051 compress/zlib: not in enabled drivers build config 00:02:39.051 regex/mlx5: not in enabled drivers build config 00:02:39.051 regex/cn9k: not in enabled drivers build config 00:02:39.051 vdpa/ifc: not in enabled drivers build config 00:02:39.051 vdpa/mlx5: not in enabled drivers build config 00:02:39.051 vdpa/sfc: not in enabled drivers build config 00:02:39.051 event/cnxk: not in enabled drivers build config 00:02:39.051 event/dlb2: not in enabled drivers build config 00:02:39.051 event/dpaa: not in enabled drivers build config 00:02:39.051 event/dpaa2: not in enabled drivers build config 00:02:39.051 event/dsw: not in enabled drivers build config 00:02:39.051 event/opdl: not in enabled drivers build config 00:02:39.051 event/skeleton: not in enabled drivers build config 00:02:39.051 event/sw: not in enabled drivers build config 00:02:39.051 event/octeontx: not in enabled drivers build config 00:02:39.051 baseband/acc: not in enabled drivers build config 00:02:39.051 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:39.051 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:39.051 baseband/la12xx: not in enabled drivers build config 00:02:39.051 baseband/null: not in enabled drivers build config 00:02:39.051 baseband/turbo_sw: not in enabled drivers build config 00:02:39.051 gpu/cuda: not in enabled drivers build config 00:02:39.051 00:02:39.051 00:02:39.051 Build targets in project: 310 00:02:39.051 00:02:39.051 DPDK 22.11.4 00:02:39.051 00:02:39.051 User defined options 00:02:39.051 libdir : lib 00:02:39.051 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:39.051 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:39.051 c_link_args : 00:02:39.051 enable_docs : false 00:02:39.051 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:39.051 enable_kmods : false 00:02:39.051 machine : native 00:02:39.051 tests : false 00:02:39.051 00:02:39.051 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:39.051 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:02:39.310 00:45:13 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:39.310 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:39.310 [1/737] Generating lib/rte_kvargs_mingw with a custom command 00:02:39.310 [2/737] Generating lib/rte_telemetry_mingw with a custom command 00:02:39.310 [3/737] Generating lib/rte_kvargs_def with a custom command 00:02:39.310 [4/737] Generating lib/rte_telemetry_def with a custom command 00:02:39.310 [5/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:39.310 [6/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:39.593 [7/737] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:39.593 [8/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:39.593 [9/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:39.593 [10/737] Linking static target lib/librte_kvargs.a 00:02:39.593 [11/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:39.593 [12/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:39.593 [13/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:39.593 [14/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:39.593 [15/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:39.593 [16/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:39.593 [17/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:39.852 [18/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:39.852 [19/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:39.852 [20/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:39.852 [21/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:39.852 [22/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:39.852 [23/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:39.852 [24/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:39.852 [25/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:39.852 [26/737] Linking static target lib/librte_telemetry.a 00:02:39.852 [27/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:39.852 [28/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:39.852 [29/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:39.852 [30/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:39.852 [31/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:39.852 [32/737] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.852 [33/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:40.111 [34/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:40.111 [35/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:40.111 [36/737] Linking target lib/librte_kvargs.so.23.0 00:02:40.111 [37/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:40.111 [38/737] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:40.111 [39/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:40.111 [40/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:40.111 [41/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:40.111 [42/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:40.111 [43/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:40.370 [44/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:40.370 [45/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:40.370 [46/737] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:40.370 [47/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:40.370 [48/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:40.370 [49/737] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:40.370 [50/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:40.370 [51/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:40.370 [52/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:40.370 [53/737] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.370 [54/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:40.370 [55/737] Linking target lib/librte_telemetry.so.23.0 00:02:40.370 [56/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:40.629 [57/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:40.629 [58/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:40.629 [59/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:40.629 [60/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:40.629 [61/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:40.629 [62/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:40.629 [63/737] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:40.629 [64/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:40.629 [65/737] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:40.629 [66/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:40.629 [67/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:40.629 [68/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:40.629 [69/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:40.629 [70/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:40.629 [71/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:40.629 [72/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:40.629 [73/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:40.888 [74/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:40.888 [75/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:40.888 [76/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:40.888 [77/737] Generating lib/rte_eal_def with a custom command 00:02:40.888 [78/737] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:40.888 [79/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:40.888 [80/737] Generating lib/rte_eal_mingw with a custom command 00:02:40.888 [81/737] Generating lib/rte_ring_def with a custom command 00:02:40.888 [82/737] Generating lib/rte_ring_mingw with a custom command 00:02:40.888 [83/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:40.888 [84/737] Generating lib/rte_rcu_def with a custom command 00:02:40.888 [85/737] Generating lib/rte_rcu_mingw with a custom command 00:02:40.888 [86/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:40.888 [87/737] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:40.888 [88/737] Linking static target lib/librte_ring.a 00:02:40.888 [89/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:40.888 [90/737] Generating lib/rte_mempool_def with a custom command 00:02:41.147 [91/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:41.147 [92/737] Generating lib/rte_mempool_mingw with a custom command 00:02:41.147 [93/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:41.147 [94/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:41.147 [95/737] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:41.147 [96/737] Generating lib/rte_mbuf_def with a custom command 00:02:41.147 [97/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:41.147 [98/737] Generating lib/rte_mbuf_mingw with a custom command 00:02:41.405 [99/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:41.405 [100/737] Linking static target lib/librte_eal.a 00:02:41.405 [101/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:41.405 [102/737] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.405 [103/737] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:41.405 [104/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:41.405 [105/737] Linking static target lib/librte_rcu.a 00:02:41.405 [106/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:41.664 [107/737] Linking static target lib/librte_mempool.a 00:02:41.664 [108/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:41.664 [109/737] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:41.664 [110/737] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:41.664 [111/737] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:41.664 [112/737] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:41.664 [113/737] Generating lib/rte_net_def with a custom command 00:02:41.664 [114/737] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:41.664 [115/737] Generating lib/rte_net_mingw with a custom command 00:02:41.664 [116/737] Generating lib/rte_meter_def with a custom command 00:02:41.664 [117/737] Generating lib/rte_meter_mingw with a custom command 00:02:41.664 [118/737] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.664 [119/737] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:41.923 [120/737] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:41.923 [121/737] Linking static target lib/librte_meter.a 00:02:41.923 [122/737] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:41.923 [123/737] Linking static target lib/librte_net.a 00:02:41.923 [124/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:41.923 [125/737] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.923 [126/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:42.182 [127/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:42.182 [128/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:42.182 [129/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:42.182 [130/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:42.182 [131/737] Linking static target lib/librte_mbuf.a 00:02:42.182 [132/737] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.441 [133/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:42.441 [134/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:42.441 [135/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:42.441 [136/737] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.441 [137/737] Generating lib/rte_ethdev_def with a custom command 00:02:42.700 [138/737] Generating lib/rte_ethdev_mingw with a custom command 00:02:42.700 [139/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:42.700 [140/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:42.700 [141/737] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:42.700 [142/737] Linking static target lib/librte_pci.a 00:02:42.700 [143/737] Generating lib/rte_pci_def with a custom command 00:02:42.700 [144/737] Generating lib/rte_pci_mingw with a custom command 00:02:42.700 [145/737] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.700 [146/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:42.700 [147/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:42.958 [148/737] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.958 [149/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:42.958 [150/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:42.958 [151/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:42.958 [152/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:42.958 [153/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:42.958 [154/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:42.958 [155/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:42.958 [156/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:42.958 [157/737] Generating lib/rte_cmdline_def with a custom command 00:02:42.958 [158/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:42.958 [159/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:42.958 [160/737] Generating lib/rte_cmdline_mingw with a custom command 00:02:43.216 [161/737] Generating lib/rte_metrics_def with a custom command 00:02:43.216 [162/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:43.216 
[163/737] Generating lib/rte_metrics_mingw with a custom command 00:02:43.216 [164/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:43.216 [165/737] Generating lib/rte_hash_def with a custom command 00:02:43.216 [166/737] Generating lib/rte_hash_mingw with a custom command 00:02:43.216 [167/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:43.216 [168/737] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:43.216 [169/737] Linking static target lib/librte_cmdline.a 00:02:43.216 [170/737] Generating lib/rte_timer_def with a custom command 00:02:43.216 [171/737] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:43.216 [172/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:43.216 [173/737] Generating lib/rte_timer_mingw with a custom command 00:02:43.475 [174/737] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:43.475 [175/737] Linking static target lib/librte_metrics.a 00:02:43.475 [176/737] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:43.475 [177/737] Linking static target lib/librte_timer.a 00:02:43.734 [178/737] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:43.734 [179/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:43.734 [180/737] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:43.734 [181/737] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.993 [182/737] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.993 [183/737] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:43.993 [184/737] Generating lib/rte_acl_def with a custom command 00:02:43.993 [185/737] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:43.993 [186/737] Generating lib/rte_acl_mingw with a custom command 00:02:43.993 [187/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:43.993 [188/737] Linking static target lib/librte_ethdev.a 00:02:43.993 [189/737] Generating lib/rte_bbdev_def with a custom command 00:02:44.252 [190/737] Generating lib/rte_bbdev_mingw with a custom command 00:02:44.252 [191/737] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:44.252 [192/737] Generating lib/rte_bitratestats_def with a custom command 00:02:44.252 [193/737] Generating lib/rte_bitratestats_mingw with a custom command 00:02:44.510 [194/737] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.510 [195/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:44.510 [196/737] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:44.510 [197/737] Linking static target lib/librte_bitratestats.a 00:02:44.768 [198/737] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:44.768 [199/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:44.768 [200/737] Linking static target lib/librte_bbdev.a 00:02:44.768 [201/737] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.025 [202/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:45.025 [203/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:45.283 [204/737] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:45.283 [205/737] Linking static target lib/librte_hash.a 00:02:45.283 [206/737] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:45.541 [207/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:45.541 [208/737] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.541 [209/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:45.541 [210/737] Generating lib/rte_bpf_def with a custom command 00:02:45.541 [211/737] Generating lib/rte_bpf_mingw with a custom command 00:02:45.541 [212/737] Generating lib/rte_cfgfile_def with a custom command 00:02:45.541 [213/737] Generating lib/rte_cfgfile_mingw with a custom command 00:02:45.798 [214/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:45.798 [215/737] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:45.798 [216/737] Linking static target lib/librte_cfgfile.a 00:02:46.056 [217/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:46.056 [218/737] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.056 [219/737] Generating lib/rte_compressdev_def with a custom command 00:02:46.315 [220/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:46.315 [221/737] Generating lib/rte_compressdev_mingw with a custom command 00:02:46.315 [222/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:46.315 [223/737] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.315 [224/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:46.315 [225/737] Generating lib/rte_cryptodev_def with a custom command 00:02:46.315 [226/737] Generating lib/rte_cryptodev_mingw with a custom command 00:02:46.315 [227/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:46.315 [228/737] Linking static target lib/librte_bpf.a 00:02:46.573 [229/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:46.573 [230/737] Linking static target lib/librte_compressdev.a 00:02:46.573 [231/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:46.831 [232/737] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.831 [233/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:46.831 [234/737] Generating lib/rte_distributor_def with a custom command 00:02:46.831 [235/737] Generating lib/rte_distributor_mingw with a custom command 00:02:46.831 [236/737] Generating lib/rte_efd_def with a custom command 00:02:46.831 [237/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:46.831 [238/737] Generating lib/rte_efd_mingw with a custom command 00:02:46.831 [239/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:46.831 [240/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:46.831 [241/737] Linking static target lib/librte_acl.a 00:02:47.089 [242/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:47.089 [243/737] Linking static target lib/librte_distributor.a 00:02:47.089 [244/737] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:47.348 [245/737] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.348 [246/737] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.348 
[247/737] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:47.606 [248/737] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.606 [249/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:47.606 [250/737] Generating lib/rte_eventdev_def with a custom command 00:02:47.606 [251/737] Generating lib/rte_eventdev_mingw with a custom command 00:02:47.864 [252/737] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:47.864 [253/737] Linking static target lib/librte_efd.a 00:02:48.122 [254/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:48.122 [255/737] Generating lib/rte_gpudev_def with a custom command 00:02:48.122 [256/737] Generating lib/rte_gpudev_mingw with a custom command 00:02:48.122 [257/737] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.380 [258/737] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:48.380 [259/737] Linking static target lib/librte_gpudev.a 00:02:48.380 [260/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:48.380 [261/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:48.380 [262/737] Linking static target lib/librte_cryptodev.a 00:02:48.380 [263/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:48.638 [264/737] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:48.638 [265/737] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:48.638 [266/737] Generating lib/rte_gro_def with a custom command 00:02:48.638 [267/737] Generating lib/rte_gro_mingw with a custom command 00:02:48.638 [268/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:48.896 [269/737] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:48.896 [270/737] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:49.154 [271/737] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:49.154 [272/737] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:49.154 [273/737] Linking static target lib/librte_gro.a 00:02:49.154 [274/737] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:49.412 [275/737] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.412 [276/737] Generating lib/rte_gso_def with a custom command 00:02:49.412 [277/737] Generating lib/rte_gso_mingw with a custom command 00:02:49.412 [278/737] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:49.412 [279/737] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:49.412 [280/737] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.412 [281/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:49.412 [282/737] Linking static target lib/librte_eventdev.a 00:02:49.670 [283/737] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:49.670 [284/737] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:49.670 [285/737] Linking static target lib/librte_gso.a 00:02:49.670 [286/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:49.928 [287/737] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.928 [288/737] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:49.928 [289/737] Generating lib/rte_ip_frag_def with a custom command 00:02:49.928 [290/737] Generating lib/rte_ip_frag_mingw with a custom command 00:02:49.928 [291/737] Generating lib/rte_jobstats_def with a custom command 00:02:49.928 [292/737] Generating lib/rte_jobstats_mingw with a custom command 00:02:49.928 [293/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:49.928 [294/737] Generating lib/rte_latencystats_def with a custom command 00:02:49.928 [295/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:49.928 [296/737] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:49.928 [297/737] Generating lib/rte_latencystats_mingw with a custom command 00:02:49.928 [298/737] Linking static target lib/librte_jobstats.a 00:02:50.186 [299/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:50.186 [300/737] Generating lib/rte_lpm_def with a custom command 00:02:50.186 [301/737] Generating lib/rte_lpm_mingw with a custom command 00:02:50.186 [302/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:50.186 [303/737] Linking static target lib/librte_ip_frag.a 00:02:50.443 [304/737] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.443 [305/737] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:50.443 [306/737] Linking static target lib/librte_latencystats.a 00:02:50.443 [307/737] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:50.701 [308/737] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:50.701 [309/737] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:50.701 [310/737] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.701 [311/737] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.701 [312/737] Generating lib/rte_member_def with a custom command 00:02:50.701 [313/737] Generating lib/rte_member_mingw with a custom command 00:02:50.701 [314/737] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.701 [315/737] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:50.959 [316/737] Generating lib/rte_pcapng_def with a custom command 00:02:50.959 [317/737] Generating lib/rte_pcapng_mingw with a custom command 00:02:50.959 [318/737] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:50.959 [319/737] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:50.959 [320/737] Linking static target lib/librte_lpm.a 00:02:50.959 [321/737] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:51.216 [322/737] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:51.216 [323/737] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:51.216 [324/737] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.216 [325/737] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:51.475 [326/737] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:51.475 [327/737] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.475 [328/737] Generating lib/lpm.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:51.475 [329/737] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:51.475 [330/737] Linking static target lib/librte_pcapng.a 00:02:51.475 [331/737] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:51.475 [332/737] Linking target lib/librte_eal.so.23.0 00:02:51.475 [333/737] Generating lib/rte_power_def with a custom command 00:02:51.475 [334/737] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:51.475 [335/737] Generating lib/rte_power_mingw with a custom command 00:02:51.475 [336/737] Generating lib/rte_rawdev_def with a custom command 00:02:51.475 [337/737] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:51.475 [338/737] Generating lib/rte_rawdev_mingw with a custom command 00:02:51.733 [339/737] Generating lib/rte_regexdev_def with a custom command 00:02:51.733 [340/737] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:51.733 [341/737] Generating lib/rte_regexdev_mingw with a custom command 00:02:51.733 [342/737] Linking target lib/librte_ring.so.23.0 00:02:51.733 [343/737] Linking target lib/librte_meter.so.23.0 00:02:51.733 [344/737] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:51.733 [345/737] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:51.733 [346/737] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:51.733 [347/737] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:51.733 [348/737] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.733 [349/737] Linking target lib/librte_pci.so.23.0 00:02:51.733 [350/737] Linking target lib/librte_timer.so.23.0 00:02:51.733 [351/737] Linking target lib/librte_rcu.so.23.0 00:02:51.991 [352/737] Linking target lib/librte_mempool.so.23.0 00:02:51.991 [353/737] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:51.991 [354/737] Linking target lib/librte_acl.so.23.0 00:02:51.991 [355/737] Linking target lib/librte_cfgfile.so.23.0 00:02:51.991 [356/737] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:51.991 [357/737] Linking static target lib/librte_power.a 00:02:51.991 [358/737] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:51.991 [359/737] Linking static target lib/librte_rawdev.a 00:02:51.991 [360/737] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:51.991 [361/737] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:51.991 [362/737] Linking target lib/librte_jobstats.so.23.0 00:02:51.991 [363/737] Generating lib/rte_dmadev_mingw with a custom command 00:02:51.991 [364/737] Generating lib/rte_dmadev_def with a custom command 00:02:51.991 [365/737] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:51.991 [366/737] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:51.991 [367/737] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:51.991 [368/737] Linking static target lib/librte_regexdev.a 00:02:51.991 [369/737] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.991 [370/737] Generating lib/rte_rib_def with a custom command 00:02:51.991 [371/737] Linking target lib/librte_mbuf.so.23.0 00:02:51.991 [372/737] Generating lib/rte_rib_mingw 
with a custom command 00:02:51.991 [373/737] Generating lib/rte_reorder_def with a custom command 00:02:52.248 [374/737] Generating lib/rte_reorder_mingw with a custom command 00:02:52.248 [375/737] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:52.248 [376/737] Linking target lib/librte_net.so.23.0 00:02:52.248 [377/737] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:52.248 [378/737] Linking target lib/librte_bbdev.so.23.0 00:02:52.506 [379/737] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.506 [380/737] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:52.506 [381/737] Linking target lib/librte_compressdev.so.23.0 00:02:52.506 [382/737] Linking target lib/librte_cryptodev.so.23.0 00:02:52.506 [383/737] Linking target lib/librte_ethdev.so.23.0 00:02:52.506 [384/737] Linking target lib/librte_cmdline.so.23.0 00:02:52.506 [385/737] Linking target lib/librte_hash.so.23.0 00:02:52.506 [386/737] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:52.506 [387/737] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:52.506 [388/737] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.506 [389/737] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:52.506 [390/737] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:52.506 [391/737] Linking static target lib/librte_member.a 00:02:52.764 [392/737] Linking target lib/librte_distributor.so.23.0 00:02:52.764 [393/737] Linking target lib/librte_gpudev.so.23.0 00:02:52.764 [394/737] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:52.764 [395/737] Linking target lib/librte_metrics.so.23.0 00:02:52.764 [396/737] Linking target lib/librte_gro.so.23.0 00:02:52.764 [397/737] Linking target lib/librte_bpf.so.23.0 00:02:52.764 [398/737] Linking target lib/librte_gso.so.23.0 00:02:52.764 [399/737] Linking target lib/librte_efd.so.23.0 00:02:52.764 [400/737] Linking target lib/librte_eventdev.so.23.0 00:02:52.764 [401/737] Linking target lib/librte_ip_frag.so.23.0 00:02:52.764 [402/737] Linking target lib/librte_lpm.so.23.0 00:02:52.764 [403/737] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:52.764 [404/737] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:52.764 [405/737] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:53.021 [406/737] Linking target lib/librte_latencystats.so.23.0 00:02:53.022 [407/737] Linking target lib/librte_bitratestats.so.23.0 00:02:53.022 [408/737] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:53.022 [409/737] Linking static target lib/librte_dmadev.a 00:02:53.022 [410/737] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:53.022 [411/737] Linking static target lib/librte_reorder.a 00:02:53.022 [412/737] Linking static target lib/librte_rib.a 00:02:53.022 [413/737] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.022 [414/737] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:53.022 [415/737] Linking target lib/librte_rawdev.so.23.0 00:02:53.022 [416/737] Linking target lib/librte_pcapng.so.23.0 00:02:53.022 [417/737] Linking target lib/librte_regexdev.so.23.0 00:02:53.022 [418/737] 
Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.022 [419/737] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.022 [420/737] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:53.022 [421/737] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:53.022 [422/737] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:53.022 [423/737] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:53.022 [424/737] Linking target lib/librte_power.so.23.0 00:02:53.022 [425/737] Generating lib/rte_sched_def with a custom command 00:02:53.022 [426/737] Generating lib/rte_sched_mingw with a custom command 00:02:53.022 [427/737] Generating lib/rte_security_def with a custom command 00:02:53.022 [428/737] Generating lib/rte_security_mingw with a custom command 00:02:53.022 [429/737] Linking target lib/librte_member.so.23.0 00:02:53.022 [430/737] Generating lib/rte_stack_def with a custom command 00:02:53.022 [431/737] Generating lib/rte_stack_mingw with a custom command 00:02:53.279 [432/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:53.279 [433/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:53.279 [434/737] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.279 [435/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:53.279 [436/737] Linking static target lib/librte_stack.a 00:02:53.279 [437/737] Linking target lib/librte_reorder.so.23.0 00:02:53.279 [438/737] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:53.538 [439/737] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.538 [440/737] Linking target lib/librte_stack.so.23.0 00:02:53.538 [441/737] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.538 [442/737] Linking target lib/librte_dmadev.so.23.0 00:02:53.538 [443/737] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:53.538 [444/737] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.538 [445/737] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:53.538 [446/737] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:53.795 [447/737] Linking static target lib/librte_security.a 00:02:53.795 [448/737] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.795 [449/737] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:53.795 [450/737] Generating lib/rte_vhost_def with a custom command 00:02:53.796 [451/737] Linking static target lib/librte_sched.a 00:02:53.796 [452/737] Generating lib/rte_vhost_mingw with a custom command 00:02:53.796 [453/737] Linking target lib/librte_rib.so.23.0 00:02:53.796 [454/737] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:53.796 [455/737] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:54.363 [456/737] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.363 [457/737] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.363 [458/737] Linking target lib/librte_sched.so.23.0 00:02:54.363 [459/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:54.363 [460/737] Linking target 
lib/librte_security.so.23.0 00:02:54.363 [461/737] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:54.363 [462/737] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:54.363 [463/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:54.363 [464/737] Generating lib/rte_ipsec_def with a custom command 00:02:54.363 [465/737] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:54.363 [466/737] Generating lib/rte_ipsec_mingw with a custom command 00:02:54.621 [467/737] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:54.621 [468/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:54.880 [469/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:54.880 [470/737] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:54.880 [471/737] Generating lib/rte_fib_def with a custom command 00:02:54.880 [472/737] Generating lib/rte_fib_mingw with a custom command 00:02:55.139 [473/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:55.139 [474/737] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:55.139 [475/737] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:55.139 [476/737] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:55.139 [477/737] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:55.139 [478/737] Linking static target lib/librte_ipsec.a 00:02:55.396 [479/737] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:55.397 [480/737] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:55.397 [481/737] Linking static target lib/librte_fib.a 00:02:55.654 [482/737] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:55.654 [483/737] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:55.655 [484/737] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:55.655 [485/737] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.655 [486/737] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:55.928 [487/737] Linking target lib/librte_ipsec.so.23.0 00:02:55.928 [488/737] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:55.928 [489/737] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.227 [490/737] Linking target lib/librte_fib.so.23.0 00:02:56.227 [491/737] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:56.227 [492/737] Generating lib/rte_port_def with a custom command 00:02:56.227 [493/737] Generating lib/rte_port_mingw with a custom command 00:02:56.485 [494/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:56.485 [495/737] Generating lib/rte_pdump_def with a custom command 00:02:56.485 [496/737] Generating lib/rte_pdump_mingw with a custom command 00:02:56.485 [497/737] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:56.485 [498/737] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:56.485 [499/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:56.485 [500/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:56.743 [501/737] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:56.743 [502/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:56.743 [503/737] Compiling C object 
lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:56.743 [504/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:56.743 [505/737] Linking static target lib/librte_port.a 00:02:56.743 [506/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:57.001 [507/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:57.001 [508/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:57.001 [509/737] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:57.258 [510/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:57.258 [511/737] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:57.258 [512/737] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:57.258 [513/737] Linking static target lib/librte_pdump.a 00:02:57.517 [514/737] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.517 [515/737] Linking target lib/librte_pdump.so.23.0 00:02:57.517 [516/737] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.775 [517/737] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:57.775 [518/737] Linking target lib/librte_port.so.23.0 00:02:57.775 [519/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:57.775 [520/737] Generating lib/rte_table_def with a custom command 00:02:57.775 [521/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:57.775 [522/737] Generating lib/rte_table_mingw with a custom command 00:02:57.775 [523/737] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:58.034 [524/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:58.034 [525/737] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:58.034 [526/737] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:58.034 [527/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:58.034 [528/737] Generating lib/rte_pipeline_def with a custom command 00:02:58.292 [529/737] Generating lib/rte_pipeline_mingw with a custom command 00:02:58.292 [530/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:58.292 [531/737] Linking static target lib/librte_table.a 00:02:58.292 [532/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:58.550 [533/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:58.550 [534/737] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:58.808 [535/737] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:58.808 [536/737] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:59.066 [537/737] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:59.066 [538/737] Generating lib/rte_graph_def with a custom command 00:02:59.066 [539/737] Generating lib/rte_graph_mingw with a custom command 00:02:59.066 [540/737] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.066 [541/737] Linking target lib/librte_table.so.23.0 00:02:59.066 [542/737] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:59.324 [543/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:59.324 [544/737] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 
00:02:59.324 [545/737] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:59.324 [546/737] Linking static target lib/librte_graph.a 00:02:59.324 [547/737] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:59.582 [548/737] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:59.839 [549/737] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:59.839 [550/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:59.839 [551/737] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:00.097 [552/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:00.097 [553/737] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:00.097 [554/737] Generating lib/rte_node_def with a custom command 00:03:00.097 [555/737] Generating lib/rte_node_mingw with a custom command 00:03:00.097 [556/737] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:00.097 [557/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:00.355 [558/737] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:00.355 [559/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:00.355 [560/737] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.355 [561/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:00.355 [562/737] Generating drivers/rte_bus_pci_def with a custom command 00:03:00.355 [563/737] Linking target lib/librte_graph.so.23.0 00:03:00.355 [564/737] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:00.355 [565/737] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:00.612 [566/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:00.612 [567/737] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:00.612 [568/737] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:00.612 [569/737] Linking static target lib/librte_node.a 00:03:00.612 [570/737] Generating drivers/rte_bus_vdev_def with a custom command 00:03:00.612 [571/737] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:00.612 [572/737] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:00.612 [573/737] Generating drivers/rte_mempool_ring_def with a custom command 00:03:00.612 [574/737] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:00.612 [575/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:00.612 [576/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:00.612 [577/737] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:00.612 [578/737] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:00.612 [579/737] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:00.870 [580/737] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.870 [581/737] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:00.870 [582/737] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:00.870 [583/737] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.870 [584/737] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.870 [585/737] Linking target lib/librte_node.so.23.0 00:03:00.870 
[586/737] Linking static target drivers/librte_bus_pci.a 00:03:00.870 [587/737] Linking static target drivers/librte_bus_vdev.a 00:03:00.870 [588/737] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.127 [589/737] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.127 [590/737] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.385 [591/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:01.385 [592/737] Linking target drivers/librte_bus_vdev.so.23.0 00:03:01.385 [593/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:01.385 [594/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:01.385 [595/737] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:01.385 [596/737] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.385 [597/737] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:01.385 [598/737] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:01.385 [599/737] Linking target drivers/librte_bus_pci.so.23.0 00:03:01.643 [600/737] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:01.643 [601/737] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:01.643 [602/737] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.643 [603/737] Linking static target drivers/librte_mempool_ring.a 00:03:01.643 [604/737] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.643 [605/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:01.643 [606/737] Linking target drivers/librte_mempool_ring.so.23.0 00:03:02.208 [607/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:02.208 [608/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:02.773 [609/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:02.773 [610/737] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:02.773 [611/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:03.031 [612/737] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:03.031 [613/737] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:03.288 [614/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:03.288 [615/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:03.288 [616/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:03.546 [617/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:03.546 [618/737] Generating drivers/rte_net_i40e_def with a custom command 00:03:03.546 [619/737] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:03.546 [620/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:04.113 [621/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:04.371 [622/737] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:04.371 [623/737] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:04.629 [624/737] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:04.629 [625/737] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:04.887 [626/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:04.887 [627/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:04.887 [628/737] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:04.887 [629/737] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:05.146 [630/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:05.146 [631/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:05.404 [632/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:05.404 [633/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:05.663 [634/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:05.921 [635/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:05.921 [636/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:05.921 [637/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:05.921 [638/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:05.921 [639/737] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:05.921 [640/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:06.179 [641/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:06.179 [642/737] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:06.179 [643/737] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:06.179 [644/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:06.179 [645/737] Linking static target drivers/librte_net_i40e.a 00:03:06.179 [646/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:06.179 [647/737] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:06.437 [648/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:06.696 [649/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:06.696 [650/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:06.696 [651/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:06.696 [652/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:06.954 [653/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:06.954 [654/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:06.954 [655/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:07.213 [656/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:07.213 [657/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:07.213 [658/737] Compiling C 
object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:07.213 [659/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:07.213 [660/737] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.472 [661/737] Linking target drivers/librte_net_i40e.so.23.0 00:03:07.472 [662/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:07.472 [663/737] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:07.730 [664/737] Linking static target lib/librte_vhost.a 00:03:07.730 [665/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:07.730 [666/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:07.989 [667/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:07.989 [668/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:08.247 [669/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:08.506 [670/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:08.506 [671/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:08.506 [672/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:08.506 [673/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:08.765 [674/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:08.766 [675/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:08.766 [676/737] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:09.025 [677/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:09.025 [678/737] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:09.025 [679/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:09.025 [680/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:09.318 [681/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:09.318 [682/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:09.318 [683/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:09.318 [684/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:09.318 [685/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:09.587 [686/737] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.587 [687/737] Linking target lib/librte_vhost.so.23.0 00:03:09.587 [688/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:09.587 [689/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:09.846 [690/737] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:09.846 [691/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:10.105 [692/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:10.105 [693/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:10.364 [694/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:10.364 [695/737] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:10.622 [696/737] Compiling 
C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:10.622 [697/737] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:10.622 [698/737] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:10.879 [699/737] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:10.879 [700/737] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:11.137 [701/737] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:11.137 [702/737] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:11.396 [703/737] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:11.396 [704/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:11.396 [705/737] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:11.963 [706/737] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:11.963 [707/737] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:11.963 [708/737] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:12.222 [709/737] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:12.222 [710/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:12.222 [711/737] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:12.480 [712/737] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:12.480 [713/737] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:12.480 [714/737] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:12.480 [715/737] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:12.738 [716/737] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:12.997 [717/737] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:17.186 [718/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:17.186 [719/737] Linking static target lib/librte_pipeline.a 00:03:17.455 [720/737] Linking target app/dpdk-proc-info 00:03:17.455 [721/737] Linking target app/dpdk-test-compress-perf 00:03:17.455 [722/737] Linking target app/dpdk-test-acl 00:03:17.455 [723/737] Linking target app/dpdk-test-cmdline 00:03:17.455 [724/737] Linking target app/dpdk-test-crypto-perf 00:03:17.455 [725/737] Linking target app/dpdk-test-bbdev 00:03:17.455 [726/737] Linking target app/dpdk-test-fib 00:03:17.455 [727/737] Linking target app/dpdk-pdump 00:03:17.455 [728/737] Linking target app/dpdk-test-eventdev 00:03:17.730 [729/737] Linking target app/dpdk-test-gpudev 00:03:17.730 [730/737] Linking target app/dpdk-test-flow-perf 00:03:17.730 [731/737] Linking target app/dpdk-test-regex 00:03:17.730 [732/737] Linking target app/dpdk-test-pipeline 00:03:17.730 [733/737] Linking target app/dpdk-test-sad 00:03:17.730 [734/737] Linking target app/dpdk-test-security-perf 00:03:17.989 [735/737] Linking target app/dpdk-testpmd 00:03:22.183 [736/737] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.183 [737/737] Linking target lib/librte_pipeline.so.23.0 00:03:22.183 00:45:55 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:22.183 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:22.183 [0/1] Installing files. 
00:03:22.183 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.183 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.184 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.184 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.184 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.184 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.185 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.186 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.187 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.187 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:22.188 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.188 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.188 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.188 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.188 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.188 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.188 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.188 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.188 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.188 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.188 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.188 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.188 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.188 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.188 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.188 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.447 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_acl.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 
Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:22.448 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:22.448 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:22.448 Installing 
drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.448 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:22.448 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.448 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.448 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.448 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.448 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.448 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.448 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.448 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.710 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.710 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.710 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.710 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.710 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.710 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.710 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.710 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 
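(Editor's note, not part of the console output.) The EAL headers copied above into build/include — rte_eal.h, rte_lcore.h, rte_launch.h and friends — are the entry point for any application compiled against this tree. Purely as an illustration, a minimal EAL bring-up in C might look like the sketch below; it assumes the installed headers are on the include path and the libraries under build/lib are reachable at link and run time.

```c
/* Minimal EAL bring-up sketch (illustration only, not part of this build log).
 * Assumes the rte_*.h headers installed above are on the include path and
 * the shared libraries under build/lib can be found at link/run time. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
    /* rte_eal_init() consumes the EAL command-line options (core list,
     * hugepage settings, driver options) and leaves the rest to the app. */
    int ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }

    printf("EAL up, %u lcore(s) available\n", rte_lcore_count());

    rte_eal_cleanup();
    return 0;
}
```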
00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 
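(Editor's note, not part of the console output.) The rte_mempool.h and rte_mbuf.h headers listed just above are what packet-processing applications use for buffer pools. As a hedged example only — the pool name and sizes here are arbitrary, not taken from this build — creating a pktmbuf pool and pulling one buffer from it could look like this, assuming the EAL has already been initialized as in the previous sketch:

```c
/* Packet-buffer pool sketch (illustration only); assumes rte_eal_init()
 * has already succeeded. Pool name and sizing are arbitrary examples. */
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static struct rte_mbuf *grab_one_mbuf(void)
{
    /* 8191 mbufs, per-lcore cache of 256, default data room, allocated on
     * the caller's NUMA socket. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("example_pool",
            8191, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        return NULL;            /* rte_errno holds the failure reason */

    struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
    return m;                   /* caller releases it with rte_pktmbuf_free() */
}
```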
Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.712 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:22.713 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:22.713 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:22.713 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:22.713 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:22.713 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:22.713 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:22.713 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:22.713 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:22.713 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:22.713 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:22.713 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:22.713 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:22.713 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:22.713 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:22.713 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:22.713 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:22.713 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:22.713 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:22.713 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:22.713 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:22.713 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:22.713 Installing symlink pointing to librte_pci.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:22.713 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:22.713 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:22.713 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:22.713 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:22.713 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:22.713 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:22.713 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:22.713 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:22.713 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:22.713 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:22.714 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:22.714 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:22.714 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:22.714 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:22.714 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:22.714 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:22.714 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:22.714 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:22.714 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:22.714 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:22.714 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:22.714 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:22.714 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:22.714 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:22.714 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:22.714 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:22.714 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:22.714 Installing symlink pointing to librte_eventdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:22.714 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:22.714 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:22.714 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:22.714 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:22.714 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:22.714 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:22.714 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:22.714 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:22.714 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:22.714 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:22.714 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:22.714 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:22.714 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:22.714 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:22.714 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:22.714 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:22.714 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:22.714 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:22.714 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:22.714 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:22.714 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:22.714 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:22.714 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:22.714 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:22.714 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:22.714 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:22.714 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:22.714 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:22.714 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:22.714 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:22.714 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:22.714 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 
00:03:22.714 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:22.714 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:22.714 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:22.714 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:22.714 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:22.714 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:22.714 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:22.714 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:22.714 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:22.714 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:22.714 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:22.714 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:22.714 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:22.714 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:22.714 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:22.714 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:22.714 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:22.714 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:22.714 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:22.714 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:22.714 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:22.714 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:22.714 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:22.714 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:22.714 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:22.714 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:22.714 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:22.714 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:22.714 Installing symlink pointing to librte_table.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:22.714 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:22.714 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:22.714 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:22.714 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:22.714 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:22.714 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:22.714 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:22.714 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:22.714 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:22.714 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:22.714 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:22.714 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:22.714 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:22.714 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:22.714 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:22.973 00:45:57 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:22.973 00:45:57 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:22.973 00:45:57 -- common/autobuild_common.sh@203 -- $ cat 00:03:22.973 00:45:57 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:22.973 00:03:22.973 real 0m50.613s 00:03:22.973 user 4m44.073s 00:03:22.973 sys 1m0.970s 00:03:22.973 00:45:57 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:22.973 ************************************ 00:03:22.973 END TEST build_native_dpdk 00:03:22.973 ************************************ 00:03:22.973 00:45:57 -- common/autotest_common.sh@10 -- $ set +x 00:03:22.973 00:45:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:22.973 00:45:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:22.973 00:45:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:22.973 00:45:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:22.973 00:45:57 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:03:22.973 00:45:57 -- spdk/autobuild.sh@58 -- $ unittest_build 00:03:22.973 00:45:57 -- common/autobuild_common.sh@416 -- $ run_test unittest_build _unittest_build 00:03:22.973 00:45:57 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:03:22.974 00:45:57 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:22.974 00:45:57 -- common/autotest_common.sh@10 -- $ set +x 00:03:22.974 
************************************ 00:03:22.974 START TEST unittest_build 00:03:22.974 ************************************ 00:03:22.974 00:45:57 -- common/autotest_common.sh@1114 -- $ _unittest_build 00:03:22.974 00:45:57 -- common/autobuild_common.sh@407 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared 00:03:22.974 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:22.974 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.974 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:22.974 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:23.541 Using 'verbs' RDMA provider 00:03:42.219 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:57.101 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:57.101 Creating mk/config.mk...done. 00:03:57.101 Creating mk/cc.flags.mk...done. 00:03:57.102 Type 'make' to build. 00:03:57.102 00:46:30 -- common/autobuild_common.sh@408 -- $ make -j10 00:03:57.102 make[1]: Nothing to be done for 'all'. 00:04:15.193 CC lib/log/log_flags.o 00:04:15.193 CC lib/log/log.o 00:04:15.193 CC lib/log/log_deprecated.o 00:04:15.193 CC lib/ut/ut.o 00:04:15.193 CC lib/ut_mock/mock.o 00:04:15.193 LIB libspdk_ut.a 00:04:15.193 LIB libspdk_ut_mock.a 00:04:15.193 LIB libspdk_log.a 00:04:15.193 CC lib/util/base64.o 00:04:15.193 CC lib/util/cpuset.o 00:04:15.193 CC lib/util/bit_array.o 00:04:15.193 CC lib/util/crc16.o 00:04:15.193 CC lib/util/crc32.o 00:04:15.193 CC lib/util/crc32c.o 00:04:15.193 CC lib/dma/dma.o 00:04:15.193 CC lib/ioat/ioat.o 00:04:15.193 CXX lib/trace_parser/trace.o 00:04:15.193 CC lib/vfio_user/host/vfio_user_pci.o 00:04:15.193 LIB libspdk_dma.a 00:04:15.193 CC lib/util/crc32_ieee.o 00:04:15.193 CC lib/util/crc64.o 00:04:15.193 CC lib/util/dif.o 00:04:15.193 CC lib/util/fd.o 00:04:15.193 CC lib/util/file.o 00:04:15.193 CC lib/util/hexlify.o 00:04:15.193 CC lib/util/iov.o 00:04:15.193 CC lib/vfio_user/host/vfio_user.o 00:04:15.193 CC lib/util/math.o 00:04:15.193 CC lib/util/pipe.o 00:04:15.193 LIB libspdk_ioat.a 00:04:15.193 CC lib/util/strerror_tls.o 00:04:15.193 CC lib/util/string.o 00:04:15.193 CC lib/util/uuid.o 00:04:15.193 CC lib/util/fd_group.o 00:04:15.193 CC lib/util/xor.o 00:04:15.193 CC lib/util/zipf.o 00:04:15.193 LIB libspdk_vfio_user.a 00:04:15.193 LIB libspdk_util.a 00:04:15.193 CC lib/rdma/common.o 00:04:15.193 CC lib/rdma/rdma_verbs.o 00:04:15.193 CC lib/vmd/vmd.o 00:04:15.193 CC lib/vmd/led.o 00:04:15.193 CC lib/conf/conf.o 00:04:15.193 CC lib/json/json_parse.o 00:04:15.193 CC lib/json/json_util.o 00:04:15.193 CC lib/idxd/idxd.o 00:04:15.193 CC lib/env_dpdk/env.o 00:04:15.193 LIB libspdk_trace_parser.a 00:04:15.193 CC lib/env_dpdk/memory.o 00:04:15.193 CC lib/env_dpdk/pci.o 00:04:15.193 CC lib/env_dpdk/init.o 00:04:15.451 CC lib/env_dpdk/threads.o 00:04:15.451 CC lib/json/json_write.o 00:04:15.451 LIB libspdk_conf.a 00:04:15.451 CC lib/env_dpdk/pci_ioat.o 00:04:15.451 LIB libspdk_rdma.a 00:04:15.451 CC lib/env_dpdk/pci_virtio.o 00:04:15.451 CC lib/env_dpdk/pci_vmd.o 00:04:15.451 CC lib/env_dpdk/pci_idxd.o 00:04:15.451 CC lib/env_dpdk/pci_event.o 00:04:15.709 LIB libspdk_json.a 00:04:15.710 CC lib/idxd/idxd_user.o 00:04:15.710 CC 
lib/env_dpdk/sigbus_handler.o 00:04:15.710 CC lib/env_dpdk/pci_dpdk.o 00:04:15.710 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:15.710 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:15.710 CC lib/jsonrpc/jsonrpc_server.o 00:04:15.710 CC lib/jsonrpc/jsonrpc_client.o 00:04:15.710 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:15.710 LIB libspdk_vmd.a 00:04:15.710 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:15.968 LIB libspdk_idxd.a 00:04:15.968 LIB libspdk_jsonrpc.a 00:04:16.227 CC lib/rpc/rpc.o 00:04:16.486 LIB libspdk_rpc.a 00:04:16.486 LIB libspdk_env_dpdk.a 00:04:16.486 CC lib/notify/notify.o 00:04:16.486 CC lib/notify/notify_rpc.o 00:04:16.486 CC lib/sock/sock.o 00:04:16.486 CC lib/sock/sock_rpc.o 00:04:16.745 CC lib/trace/trace.o 00:04:16.745 CC lib/trace/trace_flags.o 00:04:16.745 CC lib/trace/trace_rpc.o 00:04:16.745 LIB libspdk_notify.a 00:04:16.745 LIB libspdk_trace.a 00:04:17.004 LIB libspdk_sock.a 00:04:17.004 CC lib/thread/iobuf.o 00:04:17.004 CC lib/thread/thread.o 00:04:17.262 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:17.262 CC lib/nvme/nvme_ctrlr.o 00:04:17.262 CC lib/nvme/nvme_ns.o 00:04:17.262 CC lib/nvme/nvme_fabric.o 00:04:17.262 CC lib/nvme/nvme_ns_cmd.o 00:04:17.262 CC lib/nvme/nvme_pcie.o 00:04:17.262 CC lib/nvme/nvme_qpair.o 00:04:17.262 CC lib/nvme/nvme_pcie_common.o 00:04:17.262 CC lib/nvme/nvme.o 00:04:17.830 CC lib/nvme/nvme_quirks.o 00:04:17.830 CC lib/nvme/nvme_transport.o 00:04:17.830 CC lib/nvme/nvme_discovery.o 00:04:17.830 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:17.830 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:18.088 CC lib/nvme/nvme_tcp.o 00:04:18.088 CC lib/nvme/nvme_opal.o 00:04:18.088 CC lib/nvme/nvme_io_msg.o 00:04:18.088 CC lib/nvme/nvme_poll_group.o 00:04:18.088 CC lib/nvme/nvme_zns.o 00:04:18.088 CC lib/nvme/nvme_cuse.o 00:04:18.346 CC lib/nvme/nvme_vfio_user.o 00:04:18.346 CC lib/nvme/nvme_rdma.o 00:04:18.604 LIB libspdk_thread.a 00:04:18.604 CC lib/virtio/virtio.o 00:04:18.604 CC lib/blob/blobstore.o 00:04:18.604 CC lib/virtio/virtio_vhost_user.o 00:04:18.604 CC lib/init/json_config.o 00:04:18.604 CC lib/accel/accel.o 00:04:18.604 CC lib/accel/accel_rpc.o 00:04:18.863 CC lib/accel/accel_sw.o 00:04:18.863 CC lib/init/subsystem.o 00:04:18.863 CC lib/init/subsystem_rpc.o 00:04:18.863 CC lib/init/rpc.o 00:04:18.863 CC lib/virtio/virtio_vfio_user.o 00:04:19.121 CC lib/virtio/virtio_pci.o 00:04:19.121 CC lib/blob/request.o 00:04:19.121 LIB libspdk_init.a 00:04:19.121 CC lib/blob/zeroes.o 00:04:19.121 CC lib/blob/blob_bs_dev.o 00:04:19.122 CC lib/event/app.o 00:04:19.122 CC lib/event/reactor.o 00:04:19.380 LIB libspdk_virtio.a 00:04:19.380 CC lib/event/log_rpc.o 00:04:19.380 CC lib/event/app_rpc.o 00:04:19.380 CC lib/event/scheduler_static.o 00:04:19.638 LIB libspdk_nvme.a 00:04:19.638 LIB libspdk_event.a 00:04:19.638 LIB libspdk_accel.a 00:04:19.897 CC lib/bdev/bdev.o 00:04:19.897 CC lib/bdev/bdev_rpc.o 00:04:19.897 CC lib/bdev/bdev_zone.o 00:04:19.897 CC lib/bdev/part.o 00:04:19.897 CC lib/bdev/scsi_nvme.o 00:04:21.801 LIB libspdk_blob.a 00:04:21.801 CC lib/lvol/lvol.o 00:04:21.801 CC lib/blobfs/tree.o 00:04:21.801 CC lib/blobfs/blobfs.o 00:04:22.738 LIB libspdk_blobfs.a 00:04:22.738 LIB libspdk_lvol.a 00:04:22.738 LIB libspdk_bdev.a 00:04:22.996 CC lib/scsi/dev.o 00:04:22.996 CC lib/scsi/lun.o 00:04:22.996 CC lib/scsi/scsi.o 00:04:22.996 CC lib/scsi/port.o 00:04:22.996 CC lib/scsi/scsi_bdev.o 00:04:22.996 CC lib/scsi/scsi_pr.o 00:04:22.996 CC lib/ftl/ftl_core.o 00:04:22.996 CC lib/nvmf/ctrlr.o 00:04:22.996 CC lib/ftl/ftl_init.o 00:04:22.996 CC lib/nbd/nbd.o 00:04:23.255 CC 
lib/nbd/nbd_rpc.o 00:04:23.255 CC lib/scsi/scsi_rpc.o 00:04:23.255 CC lib/scsi/task.o 00:04:23.255 CC lib/nvmf/ctrlr_discovery.o 00:04:23.255 CC lib/nvmf/ctrlr_bdev.o 00:04:23.255 CC lib/ftl/ftl_layout.o 00:04:23.512 CC lib/ftl/ftl_debug.o 00:04:23.512 CC lib/ftl/ftl_io.o 00:04:23.512 CC lib/nvmf/subsystem.o 00:04:23.512 CC lib/nvmf/nvmf.o 00:04:23.512 LIB libspdk_nbd.a 00:04:23.770 CC lib/ftl/ftl_sb.o 00:04:23.770 CC lib/ftl/ftl_l2p.o 00:04:23.770 LIB libspdk_scsi.a 00:04:23.770 CC lib/ftl/ftl_l2p_flat.o 00:04:23.770 CC lib/ftl/ftl_nv_cache.o 00:04:23.770 CC lib/ftl/ftl_band.o 00:04:23.770 CC lib/ftl/ftl_band_ops.o 00:04:24.028 CC lib/ftl/ftl_writer.o 00:04:24.028 CC lib/iscsi/conn.o 00:04:24.028 CC lib/vhost/vhost.o 00:04:24.028 CC lib/nvmf/nvmf_rpc.o 00:04:24.028 CC lib/ftl/ftl_rq.o 00:04:24.287 CC lib/vhost/vhost_rpc.o 00:04:24.287 CC lib/vhost/vhost_scsi.o 00:04:24.287 CC lib/vhost/vhost_blk.o 00:04:24.545 CC lib/nvmf/transport.o 00:04:24.545 CC lib/nvmf/tcp.o 00:04:24.545 CC lib/iscsi/init_grp.o 00:04:24.545 CC lib/vhost/rte_vhost_user.o 00:04:24.545 CC lib/ftl/ftl_reloc.o 00:04:24.804 CC lib/nvmf/rdma.o 00:04:24.804 CC lib/ftl/ftl_l2p_cache.o 00:04:24.804 CC lib/iscsi/iscsi.o 00:04:24.804 CC lib/ftl/ftl_p2l.o 00:04:25.062 CC lib/ftl/mngt/ftl_mngt.o 00:04:25.062 CC lib/iscsi/md5.o 00:04:25.062 CC lib/iscsi/param.o 00:04:25.062 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:25.062 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:25.321 CC lib/iscsi/portal_grp.o 00:04:25.321 CC lib/iscsi/tgt_node.o 00:04:25.321 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:25.321 CC lib/iscsi/iscsi_subsystem.o 00:04:25.321 CC lib/iscsi/iscsi_rpc.o 00:04:25.321 CC lib/iscsi/task.o 00:04:25.321 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:25.579 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:25.579 LIB libspdk_vhost.a 00:04:25.579 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:25.579 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:25.579 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:25.838 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:25.838 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:25.838 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:25.838 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:25.838 CC lib/ftl/utils/ftl_conf.o 00:04:25.838 CC lib/ftl/utils/ftl_md.o 00:04:25.838 CC lib/ftl/utils/ftl_mempool.o 00:04:25.838 CC lib/ftl/utils/ftl_bitmap.o 00:04:25.838 CC lib/ftl/utils/ftl_property.o 00:04:25.838 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:25.838 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:26.097 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:26.097 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:26.097 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:26.097 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:26.097 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:26.097 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:26.097 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:26.097 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:26.356 LIB libspdk_iscsi.a 00:04:26.356 CC lib/ftl/base/ftl_base_dev.o 00:04:26.356 CC lib/ftl/base/ftl_base_bdev.o 00:04:26.356 CC lib/ftl/ftl_trace.o 00:04:26.614 LIB libspdk_ftl.a 00:04:26.872 LIB libspdk_nvmf.a 00:04:27.130 CC module/env_dpdk/env_dpdk_rpc.o 00:04:27.130 CC module/accel/ioat/accel_ioat.o 00:04:27.130 CC module/sock/posix/posix.o 00:04:27.130 CC module/scheduler/gscheduler/gscheduler.o 00:04:27.130 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:27.130 CC module/accel/iaa/accel_iaa.o 00:04:27.130 CC module/blob/bdev/blob_bdev.o 00:04:27.130 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:27.130 CC module/accel/error/accel_error.o 00:04:27.130 CC module/accel/dsa/accel_dsa.o 
00:04:27.130 LIB libspdk_env_dpdk_rpc.a 00:04:27.388 CC module/accel/dsa/accel_dsa_rpc.o 00:04:27.388 LIB libspdk_scheduler_gscheduler.a 00:04:27.388 CC module/accel/ioat/accel_ioat_rpc.o 00:04:27.388 LIB libspdk_scheduler_dpdk_governor.a 00:04:27.388 CC module/accel/iaa/accel_iaa_rpc.o 00:04:27.388 CC module/accel/error/accel_error_rpc.o 00:04:27.388 LIB libspdk_accel_dsa.a 00:04:27.388 LIB libspdk_scheduler_dynamic.a 00:04:27.388 LIB libspdk_accel_ioat.a 00:04:27.388 LIB libspdk_accel_iaa.a 00:04:27.388 LIB libspdk_blob_bdev.a 00:04:27.388 LIB libspdk_accel_error.a 00:04:27.646 CC module/bdev/null/bdev_null.o 00:04:27.646 CC module/blobfs/bdev/blobfs_bdev.o 00:04:27.646 CC module/bdev/delay/vbdev_delay.o 00:04:27.646 CC module/bdev/gpt/gpt.o 00:04:27.646 CC module/bdev/lvol/vbdev_lvol.o 00:04:27.646 CC module/bdev/error/vbdev_error.o 00:04:27.646 CC module/bdev/passthru/vbdev_passthru.o 00:04:27.646 CC module/bdev/nvme/bdev_nvme.o 00:04:27.646 CC module/bdev/malloc/bdev_malloc.o 00:04:27.904 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:27.904 LIB libspdk_sock_posix.a 00:04:27.904 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:27.904 CC module/bdev/gpt/vbdev_gpt.o 00:04:27.904 CC module/bdev/error/vbdev_error_rpc.o 00:04:27.904 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:27.904 CC module/bdev/null/bdev_null_rpc.o 00:04:27.904 LIB libspdk_blobfs_bdev.a 00:04:27.904 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:28.163 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:28.163 LIB libspdk_bdev_malloc.a 00:04:28.163 CC module/bdev/nvme/nvme_rpc.o 00:04:28.163 CC module/bdev/nvme/bdev_mdns_client.o 00:04:28.163 LIB libspdk_bdev_passthru.a 00:04:28.163 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:28.163 LIB libspdk_bdev_null.a 00:04:28.163 LIB libspdk_bdev_error.a 00:04:28.163 LIB libspdk_bdev_delay.a 00:04:28.163 LIB libspdk_bdev_gpt.a 00:04:28.163 CC module/bdev/nvme/vbdev_opal.o 00:04:28.163 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:28.163 CC module/bdev/raid/bdev_raid.o 00:04:28.163 CC module/bdev/raid/bdev_raid_rpc.o 00:04:28.163 CC module/bdev/split/vbdev_split.o 00:04:28.421 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:28.421 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:28.421 LIB libspdk_bdev_lvol.a 00:04:28.421 CC module/bdev/raid/bdev_raid_sb.o 00:04:28.421 CC module/bdev/raid/raid0.o 00:04:28.421 CC module/bdev/split/vbdev_split_rpc.o 00:04:28.421 CC module/bdev/raid/raid1.o 00:04:28.421 CC module/bdev/raid/concat.o 00:04:28.421 CC module/bdev/raid/raid5f.o 00:04:28.679 LIB libspdk_bdev_split.a 00:04:28.679 LIB libspdk_bdev_zone_block.a 00:04:28.679 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:28.679 CC module/bdev/aio/bdev_aio.o 00:04:28.679 CC module/bdev/aio/bdev_aio_rpc.o 00:04:28.679 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:28.679 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:28.679 CC module/bdev/ftl/bdev_ftl.o 00:04:28.679 CC module/bdev/iscsi/bdev_iscsi.o 00:04:28.939 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:28.939 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:28.939 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:29.219 LIB libspdk_bdev_aio.a 00:04:29.219 LIB libspdk_bdev_raid.a 00:04:29.219 LIB libspdk_bdev_ftl.a 00:04:29.219 LIB libspdk_bdev_iscsi.a 00:04:29.219 LIB libspdk_bdev_virtio.a 00:04:29.787 LIB libspdk_bdev_nvme.a 00:04:30.353 CC module/event/subsystems/iobuf/iobuf.o 00:04:30.353 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:30.353 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:30.353 CC module/event/subsystems/sock/sock.o 00:04:30.353 
CC module/event/subsystems/scheduler/scheduler.o 00:04:30.353 CC module/event/subsystems/vmd/vmd.o 00:04:30.353 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:30.353 LIB libspdk_event_vmd.a 00:04:30.353 LIB libspdk_event_vhost_blk.a 00:04:30.353 LIB libspdk_event_scheduler.a 00:04:30.353 LIB libspdk_event_iobuf.a 00:04:30.353 LIB libspdk_event_sock.a 00:04:30.612 CC module/event/subsystems/accel/accel.o 00:04:30.870 LIB libspdk_event_accel.a 00:04:31.129 CC module/event/subsystems/bdev/bdev.o 00:04:31.129 LIB libspdk_event_bdev.a 00:04:31.389 CC module/event/subsystems/nbd/nbd.o 00:04:31.389 CC module/event/subsystems/scsi/scsi.o 00:04:31.389 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:31.389 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:31.648 LIB libspdk_event_nbd.a 00:04:31.648 LIB libspdk_event_scsi.a 00:04:31.648 LIB libspdk_event_nvmf.a 00:04:31.907 CC module/event/subsystems/iscsi/iscsi.o 00:04:31.907 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:32.165 LIB libspdk_event_vhost_scsi.a 00:04:32.165 LIB libspdk_event_iscsi.a 00:04:32.165 CC app/trace_record/trace_record.o 00:04:32.165 CXX app/trace/trace.o 00:04:32.424 CC app/nvmf_tgt/nvmf_main.o 00:04:32.424 CC app/iscsi_tgt/iscsi_tgt.o 00:04:32.424 CC app/spdk_tgt/spdk_tgt.o 00:04:32.424 CC examples/accel/perf/accel_perf.o 00:04:32.424 CC test/accel/dif/dif.o 00:04:32.424 CC test/blobfs/mkfs/mkfs.o 00:04:32.424 CC test/bdev/bdevio/bdevio.o 00:04:32.424 CC test/app/bdev_svc/bdev_svc.o 00:04:32.683 LINK nvmf_tgt 00:04:32.683 LINK iscsi_tgt 00:04:32.683 LINK spdk_tgt 00:04:32.683 LINK spdk_trace_record 00:04:32.683 LINK mkfs 00:04:32.683 LINK bdev_svc 00:04:32.683 LINK spdk_trace 00:04:32.942 LINK dif 00:04:32.942 LINK accel_perf 00:04:32.942 LINK bdevio 00:04:33.202 CC examples/blob/hello_world/hello_blob.o 00:04:33.202 CC examples/bdev/hello_world/hello_bdev.o 00:04:33.461 LINK hello_blob 00:04:33.720 LINK hello_bdev 00:04:34.288 CC examples/bdev/bdevperf/bdevperf.o 00:04:34.936 LINK bdevperf 00:04:35.879 CC examples/blob/cli/blobcli.o 00:04:36.137 LINK blobcli 00:04:36.397 TEST_HEADER include/spdk/accel.h 00:04:36.397 TEST_HEADER include/spdk/accel_module.h 00:04:36.397 TEST_HEADER include/spdk/assert.h 00:04:36.397 TEST_HEADER include/spdk/barrier.h 00:04:36.397 TEST_HEADER include/spdk/base64.h 00:04:36.397 TEST_HEADER include/spdk/bdev.h 00:04:36.397 TEST_HEADER include/spdk/bdev_module.h 00:04:36.397 TEST_HEADER include/spdk/bdev_zone.h 00:04:36.397 TEST_HEADER include/spdk/bit_array.h 00:04:36.397 TEST_HEADER include/spdk/bit_pool.h 00:04:36.397 TEST_HEADER include/spdk/blob.h 00:04:36.397 TEST_HEADER include/spdk/blob_bdev.h 00:04:36.397 TEST_HEADER include/spdk/blobfs.h 00:04:36.397 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:36.397 TEST_HEADER include/spdk/conf.h 00:04:36.397 TEST_HEADER include/spdk/config.h 00:04:36.397 TEST_HEADER include/spdk/cpuset.h 00:04:36.397 TEST_HEADER include/spdk/crc16.h 00:04:36.397 TEST_HEADER include/spdk/crc32.h 00:04:36.397 TEST_HEADER include/spdk/crc64.h 00:04:36.397 TEST_HEADER include/spdk/dif.h 00:04:36.397 TEST_HEADER include/spdk/dma.h 00:04:36.397 TEST_HEADER include/spdk/endian.h 00:04:36.397 TEST_HEADER include/spdk/env.h 00:04:36.397 TEST_HEADER include/spdk/env_dpdk.h 00:04:36.397 TEST_HEADER include/spdk/event.h 00:04:36.397 TEST_HEADER include/spdk/fd.h 00:04:36.397 TEST_HEADER include/spdk/fd_group.h 00:04:36.397 TEST_HEADER include/spdk/file.h 00:04:36.397 TEST_HEADER include/spdk/ftl.h 00:04:36.397 TEST_HEADER include/spdk/gpt_spec.h 00:04:36.397 
TEST_HEADER include/spdk/hexlify.h 00:04:36.397 TEST_HEADER include/spdk/histogram_data.h 00:04:36.397 TEST_HEADER include/spdk/idxd.h 00:04:36.397 TEST_HEADER include/spdk/idxd_spec.h 00:04:36.397 TEST_HEADER include/spdk/init.h 00:04:36.397 TEST_HEADER include/spdk/ioat.h 00:04:36.397 TEST_HEADER include/spdk/ioat_spec.h 00:04:36.397 TEST_HEADER include/spdk/iscsi_spec.h 00:04:36.397 TEST_HEADER include/spdk/json.h 00:04:36.655 TEST_HEADER include/spdk/jsonrpc.h 00:04:36.655 TEST_HEADER include/spdk/likely.h 00:04:36.655 TEST_HEADER include/spdk/log.h 00:04:36.655 TEST_HEADER include/spdk/lvol.h 00:04:36.655 TEST_HEADER include/spdk/memory.h 00:04:36.655 TEST_HEADER include/spdk/mmio.h 00:04:36.656 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:36.656 TEST_HEADER include/spdk/nbd.h 00:04:36.656 TEST_HEADER include/spdk/notify.h 00:04:36.656 TEST_HEADER include/spdk/nvme.h 00:04:36.656 TEST_HEADER include/spdk/nvme_intel.h 00:04:36.656 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:36.656 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:36.656 TEST_HEADER include/spdk/nvme_spec.h 00:04:36.656 TEST_HEADER include/spdk/nvme_zns.h 00:04:36.656 TEST_HEADER include/spdk/nvmf.h 00:04:36.656 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:36.656 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:36.656 TEST_HEADER include/spdk/nvmf_spec.h 00:04:36.656 TEST_HEADER include/spdk/nvmf_transport.h 00:04:36.656 TEST_HEADER include/spdk/opal.h 00:04:36.656 TEST_HEADER include/spdk/opal_spec.h 00:04:36.656 TEST_HEADER include/spdk/pci_ids.h 00:04:36.656 TEST_HEADER include/spdk/pipe.h 00:04:36.656 TEST_HEADER include/spdk/queue.h 00:04:36.656 TEST_HEADER include/spdk/reduce.h 00:04:36.656 TEST_HEADER include/spdk/rpc.h 00:04:36.656 TEST_HEADER include/spdk/scheduler.h 00:04:36.656 TEST_HEADER include/spdk/scsi.h 00:04:36.656 TEST_HEADER include/spdk/scsi_spec.h 00:04:36.656 TEST_HEADER include/spdk/sock.h 00:04:36.656 TEST_HEADER include/spdk/stdinc.h 00:04:36.656 TEST_HEADER include/spdk/string.h 00:04:36.656 TEST_HEADER include/spdk/thread.h 00:04:36.656 TEST_HEADER include/spdk/trace.h 00:04:36.656 TEST_HEADER include/spdk/trace_parser.h 00:04:36.656 TEST_HEADER include/spdk/tree.h 00:04:36.656 TEST_HEADER include/spdk/ublk.h 00:04:36.656 TEST_HEADER include/spdk/util.h 00:04:36.656 TEST_HEADER include/spdk/uuid.h 00:04:36.656 TEST_HEADER include/spdk/version.h 00:04:36.656 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:36.656 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:36.656 TEST_HEADER include/spdk/vhost.h 00:04:36.656 TEST_HEADER include/spdk/vmd.h 00:04:36.656 TEST_HEADER include/spdk/xor.h 00:04:36.656 TEST_HEADER include/spdk/zipf.h 00:04:36.656 CXX test/cpp_headers/accel.o 00:04:36.656 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:36.656 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:36.656 CC app/spdk_lspci/spdk_lspci.o 00:04:36.656 CXX test/cpp_headers/accel_module.o 00:04:36.914 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:36.914 LINK spdk_lspci 00:04:36.914 CXX test/cpp_headers/assert.o 00:04:36.914 LINK nvme_fuzz 00:04:37.174 CXX test/cpp_headers/barrier.o 00:04:37.174 CXX test/cpp_headers/base64.o 00:04:37.174 CXX test/cpp_headers/bdev.o 00:04:37.174 LINK vhost_fuzz 00:04:37.174 CC app/spdk_nvme_perf/perf.o 00:04:37.433 CC app/spdk_nvme_identify/identify.o 00:04:37.433 CXX test/cpp_headers/bdev_module.o 00:04:37.433 CXX test/cpp_headers/bdev_zone.o 00:04:37.692 CC test/dma/test_dma/test_dma.o 00:04:37.692 CXX test/cpp_headers/bit_array.o 00:04:37.951 CXX test/cpp_headers/bit_pool.o 
00:04:37.951 CXX test/cpp_headers/blob.o 00:04:38.210 LINK test_dma 00:04:38.210 LINK spdk_nvme_perf 00:04:38.210 CXX test/cpp_headers/blob_bdev.o 00:04:38.210 LINK spdk_nvme_identify 00:04:38.469 CC test/event/event_perf/event_perf.o 00:04:38.469 CC test/env/mem_callbacks/mem_callbacks.o 00:04:38.469 CXX test/cpp_headers/blobfs.o 00:04:38.469 LINK iscsi_fuzz 00:04:38.469 CC test/event/reactor/reactor.o 00:04:38.469 LINK event_perf 00:04:38.469 LINK mem_callbacks 00:04:38.469 CXX test/cpp_headers/blobfs_bdev.o 00:04:38.469 CC test/event/reactor_perf/reactor_perf.o 00:04:38.728 LINK reactor 00:04:38.728 LINK reactor_perf 00:04:38.728 CC examples/ioat/perf/perf.o 00:04:38.728 CXX test/cpp_headers/conf.o 00:04:38.987 CXX test/cpp_headers/config.o 00:04:38.987 CXX test/cpp_headers/cpuset.o 00:04:38.987 LINK ioat_perf 00:04:38.987 CC test/env/vtophys/vtophys.o 00:04:39.245 CXX test/cpp_headers/crc16.o 00:04:39.245 LINK vtophys 00:04:39.245 CXX test/cpp_headers/crc32.o 00:04:39.504 CXX test/cpp_headers/crc64.o 00:04:39.504 CC test/nvme/aer/aer.o 00:04:39.504 CC test/lvol/esnap/esnap.o 00:04:39.763 CC test/event/app_repeat/app_repeat.o 00:04:39.763 CXX test/cpp_headers/dif.o 00:04:39.763 CC test/event/scheduler/scheduler.o 00:04:39.763 LINK app_repeat 00:04:39.763 CC examples/ioat/verify/verify.o 00:04:39.763 CXX test/cpp_headers/dma.o 00:04:40.022 LINK aer 00:04:40.022 CC app/spdk_nvme_discover/discovery_aer.o 00:04:40.022 CXX test/cpp_headers/endian.o 00:04:40.022 LINK scheduler 00:04:40.022 LINK verify 00:04:40.022 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:40.281 CC test/app/histogram_perf/histogram_perf.o 00:04:40.281 LINK spdk_nvme_discover 00:04:40.281 CXX test/cpp_headers/env.o 00:04:40.281 LINK env_dpdk_post_init 00:04:40.281 LINK histogram_perf 00:04:40.281 CXX test/cpp_headers/env_dpdk.o 00:04:40.540 CXX test/cpp_headers/event.o 00:04:40.799 CXX test/cpp_headers/fd.o 00:04:41.058 CC examples/nvme/hello_world/hello_world.o 00:04:41.058 CXX test/cpp_headers/fd_group.o 00:04:41.058 CXX test/cpp_headers/file.o 00:04:41.317 LINK hello_world 00:04:41.317 CC test/app/jsoncat/jsoncat.o 00:04:41.317 CXX test/cpp_headers/ftl.o 00:04:41.317 LINK jsoncat 00:04:41.577 CC app/spdk_top/spdk_top.o 00:04:41.577 CXX test/cpp_headers/gpt_spec.o 00:04:41.577 CC test/app/stub/stub.o 00:04:41.577 CC test/nvme/reset/reset.o 00:04:41.577 CXX test/cpp_headers/hexlify.o 00:04:41.912 LINK stub 00:04:41.912 CXX test/cpp_headers/histogram_data.o 00:04:41.912 LINK reset 00:04:41.912 CC test/env/memory/memory_ut.o 00:04:42.172 CXX test/cpp_headers/idxd.o 00:04:42.172 CC test/rpc_client/rpc_client_test.o 00:04:42.172 CXX test/cpp_headers/idxd_spec.o 00:04:42.767 CC test/env/pci/pci_ut.o 00:04:42.767 CXX test/cpp_headers/init.o 00:04:42.767 CC examples/nvme/reconnect/reconnect.o 00:04:42.767 LINK rpc_client_test 00:04:42.767 LINK memory_ut 00:04:43.026 LINK spdk_top 00:04:43.026 CXX test/cpp_headers/ioat.o 00:04:43.285 CXX test/cpp_headers/ioat_spec.o 00:04:43.285 LINK pci_ut 00:04:43.285 CXX test/cpp_headers/iscsi_spec.o 00:04:43.285 LINK reconnect 00:04:43.285 CXX test/cpp_headers/json.o 00:04:43.285 CC test/nvme/sgl/sgl.o 00:04:43.285 CC test/nvme/e2edp/nvme_dp.o 00:04:43.285 CXX test/cpp_headers/jsonrpc.o 00:04:43.544 CXX test/cpp_headers/likely.o 00:04:43.544 CC test/thread/poller_perf/poller_perf.o 00:04:43.544 CXX test/cpp_headers/log.o 00:04:43.544 LINK sgl 00:04:43.544 LINK nvme_dp 00:04:43.803 CC app/vhost/vhost.o 00:04:43.803 LINK poller_perf 00:04:43.803 CXX test/cpp_headers/lvol.o 
00:04:43.803 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:04:43.803 CC app/spdk_dd/spdk_dd.o 00:04:43.803 LINK vhost 00:04:43.803 CXX test/cpp_headers/memory.o 00:04:44.062 LINK histogram_ut 00:04:44.063 CXX test/cpp_headers/mmio.o 00:04:44.063 CXX test/cpp_headers/nbd.o 00:04:44.063 CXX test/cpp_headers/notify.o 00:04:44.063 CXX test/cpp_headers/nvme.o 00:04:44.322 LINK spdk_dd 00:04:44.322 CXX test/cpp_headers/nvme_intel.o 00:04:44.322 CC test/unit/lib/accel/accel.c/accel_ut.o 00:04:44.322 CXX test/cpp_headers/nvme_ocssd.o 00:04:44.322 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:04:44.581 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:44.581 CC test/thread/lock/spdk_lock.o 00:04:44.581 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:44.840 CXX test/cpp_headers/nvme_spec.o 00:04:44.840 CXX test/cpp_headers/nvme_zns.o 00:04:45.099 CC test/unit/lib/bdev/part.c/part_ut.o 00:04:45.099 CXX test/cpp_headers/nvmf.o 00:04:45.099 CC test/nvme/overhead/overhead.o 00:04:45.099 LINK esnap 00:04:45.099 LINK nvme_manage 00:04:45.099 CXX test/cpp_headers/nvmf_cmd.o 00:04:45.358 LINK overhead 00:04:45.358 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:45.618 CXX test/cpp_headers/nvmf_spec.o 00:04:45.618 CXX test/cpp_headers/nvmf_transport.o 00:04:45.618 CXX test/cpp_headers/opal.o 00:04:45.876 CXX test/cpp_headers/opal_spec.o 00:04:46.135 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:04:46.135 CXX test/cpp_headers/pci_ids.o 00:04:46.135 LINK spdk_lock 00:04:46.135 LINK accel_ut 00:04:46.135 CXX test/cpp_headers/pipe.o 00:04:46.394 CXX test/cpp_headers/queue.o 00:04:46.394 CC examples/nvme/arbitration/arbitration.o 00:04:46.394 CXX test/cpp_headers/reduce.o 00:04:46.654 CC test/nvme/err_injection/err_injection.o 00:04:46.654 CC examples/nvme/hotplug/hotplug.o 00:04:46.654 LINK blob_bdev_ut 00:04:46.654 CXX test/cpp_headers/rpc.o 00:04:46.654 LINK err_injection 00:04:46.654 LINK arbitration 00:04:46.654 CXX test/cpp_headers/scheduler.o 00:04:46.654 LINK hotplug 00:04:46.913 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:46.913 CC test/unit/lib/blob/blob.c/blob_ut.o 00:04:46.913 CXX test/cpp_headers/scsi.o 00:04:46.913 LINK cmb_copy 00:04:47.172 CXX test/cpp_headers/scsi_spec.o 00:04:47.172 CXX test/cpp_headers/sock.o 00:04:47.740 CXX test/cpp_headers/stdinc.o 00:04:47.741 CXX test/cpp_headers/string.o 00:04:47.741 CXX test/cpp_headers/thread.o 00:04:47.741 CC examples/nvme/abort/abort.o 00:04:47.741 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:04:47.741 CXX test/cpp_headers/trace.o 00:04:47.741 CC app/fio/nvme/fio_plugin.o 00:04:48.000 CXX test/cpp_headers/trace_parser.o 00:04:48.000 CXX test/cpp_headers/tree.o 00:04:48.000 CC test/nvme/startup/startup.o 00:04:48.000 LINK tree_ut 00:04:48.259 CXX test/cpp_headers/ublk.o 00:04:48.259 CC test/unit/lib/dma/dma.c/dma_ut.o 00:04:48.259 LINK startup 00:04:48.259 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:04:48.259 LINK abort 00:04:48.259 LINK part_ut 00:04:48.259 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:04:48.259 CXX test/cpp_headers/util.o 00:04:48.518 LINK spdk_nvme 00:04:48.518 LINK dma_ut 00:04:48.518 CXX test/cpp_headers/uuid.o 00:04:48.518 CXX test/cpp_headers/version.o 00:04:48.518 CXX test/cpp_headers/vfio_user_pci.o 00:04:48.778 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:48.778 CXX test/cpp_headers/vfio_user_spec.o 00:04:48.778 CC test/unit/lib/event/app.c/app_ut.o 00:04:48.778 LINK pmr_persistence 00:04:49.037 CXX test/cpp_headers/vhost.o 00:04:49.037 CXX test/cpp_headers/vmd.o 00:04:49.297 CXX 
test/cpp_headers/xor.o 00:04:49.297 LINK app_ut 00:04:49.297 CXX test/cpp_headers/zipf.o 00:04:49.297 LINK blobfs_async_ut 00:04:49.297 LINK blobfs_sync_ut 00:04:49.556 CC test/nvme/reserve/reserve.o 00:04:49.556 LINK bdev_ut 00:04:49.556 CC test/nvme/simple_copy/simple_copy.o 00:04:49.556 CC test/nvme/connect_stress/connect_stress.o 00:04:49.556 LINK reserve 00:04:49.815 CC app/fio/bdev/fio_plugin.o 00:04:49.815 CC examples/sock/hello_world/hello_sock.o 00:04:49.815 LINK connect_stress 00:04:49.815 LINK simple_copy 00:04:49.815 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:04:49.815 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:04:49.815 CC examples/vmd/lsvmd/lsvmd.o 00:04:50.073 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:04:50.073 LINK hello_sock 00:04:50.073 LINK lsvmd 00:04:50.073 LINK blobfs_bdev_ut 00:04:50.331 LINK spdk_bdev 00:04:50.331 LINK scsi_nvme_ut 00:04:50.331 CC examples/vmd/led/led.o 00:04:50.331 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:04:50.590 LINK reactor_ut 00:04:50.590 LINK led 00:04:50.590 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:04:50.849 LINK gpt_ut 00:04:50.849 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:04:51.107 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:04:51.107 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:04:51.107 CC test/nvme/boot_partition/boot_partition.o 00:04:51.107 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:04:51.107 LINK boot_partition 00:04:51.366 LINK bdev_zone_ut 00:04:51.366 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:04:51.366 LINK vbdev_lvol_ut 00:04:51.664 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:04:51.664 LINK bdev_raid_sb_ut 00:04:51.935 CC examples/nvmf/nvmf/nvmf.o 00:04:51.935 LINK concat_ut 00:04:51.935 CC examples/util/zipf/zipf.o 00:04:51.935 CC examples/thread/thread/thread_ex.o 00:04:51.935 LINK raid1_ut 00:04:52.194 LINK zipf 00:04:52.194 LINK nvmf 00:04:52.194 LINK thread 00:04:52.194 CC examples/idxd/perf/perf.o 00:04:52.452 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:04:52.452 CC test/nvme/compliance/nvme_compliance.o 00:04:52.452 LINK idxd_perf 00:04:52.711 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:04:52.711 LINK nvme_compliance 00:04:52.969 LINK bdev_raid_ut 00:04:52.969 LINK vbdev_zone_block_ut 00:04:53.228 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:04:53.228 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:53.487 LINK interrupt_tgt 00:04:53.487 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:04:53.487 LINK blob_ut 00:04:53.745 CC test/nvme/fused_ordering/fused_ordering.o 00:04:54.004 LINK ioat_ut 00:04:54.004 LINK fused_ordering 00:04:54.004 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:54.004 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:04:54.263 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:04:54.263 LINK doorbell_aers 00:04:54.263 LINK raid5f_ut 00:04:54.522 LINK bdev_ut 00:04:54.522 LINK init_grp_ut 00:04:54.522 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:04:54.780 CC test/nvme/fdp/fdp.o 00:04:55.038 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:04:55.038 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:04:55.297 LINK fdp 00:04:55.297 CC test/unit/lib/log/log.c/log_ut.o 00:04:55.297 LINK conn_ut 00:04:55.556 LINK jsonrpc_server_ut 00:04:55.556 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:04:55.556 LINK log_ut 00:04:55.556 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:04:55.814 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:04:55.814 CC 
test/nvme/cuse/cuse.o 00:04:55.814 CC test/unit/lib/notify/notify.c/notify_ut.o 00:04:56.072 LINK json_util_ut 00:04:56.072 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:04:56.331 LINK notify_ut 00:04:56.331 LINK json_write_ut 00:04:56.331 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:04:56.590 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:04:56.590 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:04:56.590 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:04:56.590 LINK cuse 00:04:56.848 LINK bdev_nvme_ut 00:04:56.848 LINK iscsi_ut 00:04:57.107 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:04:57.366 LINK nvme_ut 00:04:57.366 LINK nvme_ns_ut 00:04:57.366 LINK nvme_ctrlr_ocssd_cmd_ut 00:04:57.366 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:04:57.366 CC test/unit/lib/iscsi/param.c/param_ut.o 00:04:57.625 LINK lvol_ut 00:04:57.625 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:04:57.625 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:04:57.625 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:04:57.625 LINK nvme_ctrlr_cmd_ut 00:04:57.625 LINK json_parse_ut 00:04:57.884 LINK param_ut 00:04:57.884 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:04:57.884 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:04:58.143 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:04:58.143 LINK portal_grp_ut 00:04:58.143 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:04:58.401 LINK tgt_node_ut 00:04:58.401 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:04:58.660 LINK nvme_ns_cmd_ut 00:04:58.660 LINK nvme_ns_ocssd_cmd_ut 00:04:58.660 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:04:58.660 LINK nvme_poll_group_ut 00:04:58.918 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:04:59.177 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:04:59.177 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:04:59.177 LINK nvme_pcie_ut 00:04:59.435 LINK nvme_ctrlr_ut 00:04:59.435 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:04:59.435 LINK nvme_quirks_ut 00:04:59.695 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:04:59.695 LINK ctrlr_bdev_ut 00:04:59.955 LINK dev_ut 00:04:59.955 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:05:00.262 LINK nvmf_ut 00:05:00.262 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:05:00.262 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:05:00.262 LINK nvme_qpair_ut 00:05:00.262 LINK subsystem_ut 00:05:00.262 LINK scsi_ut 00:05:00.535 LINK ctrlr_discovery_ut 00:05:00.535 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:05:00.535 CC test/unit/lib/sock/sock.c/sock_ut.o 00:05:00.794 CC test/unit/lib/sock/posix.c/posix_ut.o 00:05:00.794 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:05:00.794 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:05:00.794 LINK lun_ut 00:05:01.053 LINK ctrlr_ut 00:05:01.312 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:05:01.571 LINK nvme_io_msg_ut 00:05:01.571 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:05:01.571 LINK scsi_bdev_ut 00:05:01.571 LINK nvme_transport_ut 00:05:01.830 LINK posix_ut 00:05:01.830 LINK scsi_pr_ut 00:05:02.089 CC test/unit/lib/util/base64.c/base64_ut.o 00:05:02.089 CC test/unit/lib/thread/thread.c/thread_ut.o 00:05:02.089 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:05:02.089 LINK tcp_ut 00:05:02.089 LINK sock_ut 00:05:02.089 LINK base64_ut 00:05:02.348 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:05:02.348 LINK nvme_tcp_ut 00:05:02.348 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:05:02.607 CC 
test/unit/lib/init/subsystem.c/subsystem_ut.o 00:05:02.607 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:05:02.607 LINK pci_event_ut 00:05:02.607 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:05:02.607 LINK bit_array_ut 00:05:02.870 LINK iobuf_ut 00:05:02.870 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:05:02.870 LINK cpuset_ut 00:05:03.129 LINK subsystem_ut 00:05:03.129 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:05:03.129 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:05:03.129 LINK rpc_ut 00:05:03.129 LINK crc16_ut 00:05:03.388 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:05:03.388 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:05:03.388 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:05:03.388 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:05:03.388 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:05:03.646 LINK crc32_ieee_ut 00:05:03.646 LINK idxd_user_ut 00:05:03.646 LINK rdma_ut 00:05:03.904 LINK nvme_fabric_ut 00:05:03.904 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:05:04.164 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:05:04.164 LINK idxd_ut 00:05:04.164 LINK crc32c_ut 00:05:04.164 LINK nvme_opal_ut 00:05:04.164 LINK nvme_pcie_common_ut 00:05:04.423 CC test/unit/lib/rdma/common.c/common_ut.o 00:05:04.423 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:05:04.423 LINK thread_ut 00:05:04.423 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:05:04.683 CC test/unit/lib/util/dif.c/dif_ut.o 00:05:04.683 CC test/unit/lib/util/iov.c/iov_ut.o 00:05:04.683 CC test/unit/lib/util/math.c/math_ut.o 00:05:04.683 LINK crc64_ut 00:05:04.683 LINK ftl_l2p_ut 00:05:04.683 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:05:04.942 LINK common_ut 00:05:04.942 LINK math_ut 00:05:04.942 LINK iov_ut 00:05:04.942 LINK transport_ut 00:05:04.942 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:05:04.942 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:05:05.201 LINK vhost_ut 00:05:05.201 CC test/unit/lib/util/string.c/string_ut.o 00:05:05.201 LINK pipe_ut 00:05:05.202 CC test/unit/lib/util/xor.c/xor_ut.o 00:05:05.202 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:05:05.460 LINK ftl_bitmap_ut 00:05:05.460 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:05:05.460 LINK xor_ut 00:05:05.460 LINK string_ut 00:05:05.460 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:05:05.718 LINK ftl_io_ut 00:05:05.718 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:05:05.718 LINK dif_ut 00:05:05.718 LINK ftl_mempool_ut 00:05:05.977 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:05:05.977 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:05:06.236 LINK ftl_band_ut 00:05:06.236 LINK nvme_rdma_ut 00:05:06.237 LINK ftl_mngt_ut 00:05:07.174 LINK ftl_sb_ut 00:05:07.174 LINK ftl_layout_upgrade_ut 00:05:07.174 LINK nvme_cuse_ut 00:05:07.742 00:05:07.742 real 1m44.666s 00:05:07.742 user 8m16.239s 00:05:07.742 sys 2m2.411s 00:05:07.742 00:47:41 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:05:07.742 00:47:41 -- common/autotest_common.sh@10 -- $ set +x 00:05:07.742 ************************************ 00:05:07.742 END TEST unittest_build 00:05:07.742 ************************************ 00:05:07.742 00:47:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:07.742 00:47:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:07.742 00:47:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:07.742 00:47:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:07.742 00:47:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:07.742 00:47:42 -- 
scripts/common.sh@332 -- # local ver1 ver1_l 00:05:07.742 00:47:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:07.742 00:47:42 -- scripts/common.sh@335 -- # IFS=.-: 00:05:07.742 00:47:42 -- scripts/common.sh@335 -- # read -ra ver1 00:05:07.742 00:47:42 -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.742 00:47:42 -- scripts/common.sh@336 -- # read -ra ver2 00:05:07.742 00:47:42 -- scripts/common.sh@337 -- # local 'op=<' 00:05:07.742 00:47:42 -- scripts/common.sh@339 -- # ver1_l=2 00:05:07.742 00:47:42 -- scripts/common.sh@340 -- # ver2_l=1 00:05:07.742 00:47:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:07.742 00:47:42 -- scripts/common.sh@343 -- # case "$op" in 00:05:07.742 00:47:42 -- scripts/common.sh@344 -- # : 1 00:05:07.742 00:47:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:07.742 00:47:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.742 00:47:42 -- scripts/common.sh@364 -- # decimal 1 00:05:07.742 00:47:42 -- scripts/common.sh@352 -- # local d=1 00:05:07.742 00:47:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.742 00:47:42 -- scripts/common.sh@354 -- # echo 1 00:05:07.742 00:47:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:07.742 00:47:42 -- scripts/common.sh@365 -- # decimal 2 00:05:07.742 00:47:42 -- scripts/common.sh@352 -- # local d=2 00:05:07.742 00:47:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.742 00:47:42 -- scripts/common.sh@354 -- # echo 2 00:05:07.742 00:47:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:07.742 00:47:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:07.742 00:47:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:07.742 00:47:42 -- scripts/common.sh@367 -- # return 0 00:05:07.742 00:47:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.742 00:47:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:07.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.742 --rc genhtml_branch_coverage=1 00:05:07.742 --rc genhtml_function_coverage=1 00:05:07.742 --rc genhtml_legend=1 00:05:07.742 --rc geninfo_all_blocks=1 00:05:07.742 --rc geninfo_unexecuted_blocks=1 00:05:07.742 00:05:07.742 ' 00:05:07.742 00:47:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:07.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.742 --rc genhtml_branch_coverage=1 00:05:07.742 --rc genhtml_function_coverage=1 00:05:07.742 --rc genhtml_legend=1 00:05:07.742 --rc geninfo_all_blocks=1 00:05:07.742 --rc geninfo_unexecuted_blocks=1 00:05:07.742 00:05:07.742 ' 00:05:07.742 00:47:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:07.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.742 --rc genhtml_branch_coverage=1 00:05:07.742 --rc genhtml_function_coverage=1 00:05:07.742 --rc genhtml_legend=1 00:05:07.742 --rc geninfo_all_blocks=1 00:05:07.742 --rc geninfo_unexecuted_blocks=1 00:05:07.742 00:05:07.742 ' 00:05:07.742 00:47:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:07.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.742 --rc genhtml_branch_coverage=1 00:05:07.742 --rc genhtml_function_coverage=1 00:05:07.742 --rc genhtml_legend=1 00:05:07.742 --rc geninfo_all_blocks=1 00:05:07.742 --rc geninfo_unexecuted_blocks=1 00:05:07.742 00:05:07.742 ' 00:05:07.742 00:47:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:07.742 
00:47:42 -- nvmf/common.sh@7 -- # uname -s 00:05:07.742 00:47:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.742 00:47:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.743 00:47:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.743 00:47:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.743 00:47:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.743 00:47:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.743 00:47:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.743 00:47:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.743 00:47:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.743 00:47:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.743 00:47:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da71da6f-5dfe-407f-b29a-5659af56b8e0 00:05:07.743 00:47:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=da71da6f-5dfe-407f-b29a-5659af56b8e0 00:05:07.743 00:47:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.743 00:47:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.743 00:47:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.743 00:47:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:07.743 00:47:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.743 00:47:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.743 00:47:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.743 00:47:42 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:07.743 00:47:42 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:07.743 00:47:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:07.743 00:47:42 -- paths/export.sh@5 -- # export PATH 00:05:07.743 00:47:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:07.743 00:47:42 -- nvmf/common.sh@46 -- # : 0 00:05:07.743 00:47:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:07.743 00:47:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:07.743 00:47:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:07.743 00:47:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.743 00:47:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.743 00:47:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:07.743 00:47:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:07.743 00:47:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:07.743 00:47:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 
00:05:07.743 00:47:42 -- spdk/autotest.sh@32 -- # uname -s 00:05:07.743 00:47:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:07.743 00:47:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:05:07.743 00:47:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:07.743 00:47:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:07.743 00:47:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:07.743 00:47:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:08.003 00:47:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:08.003 00:47:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:05:08.003 00:47:42 -- spdk/autotest.sh@48 -- # udevadm_pid=104990 00:05:08.003 00:47:42 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:05:08.003 00:47:42 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:05:08.003 00:47:42 -- spdk/autotest.sh@54 -- # echo 105005 00:05:08.003 00:47:42 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:05:08.003 00:47:42 -- spdk/autotest.sh@56 -- # echo 105006 00:05:08.003 00:47:42 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:05:08.003 00:47:42 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:05:08.003 00:47:42 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:08.003 00:47:42 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:05:08.003 00:47:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.003 00:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:08.003 00:47:42 -- spdk/autotest.sh@70 -- # create_test_list 00:05:08.003 00:47:42 -- common/autotest_common.sh@746 -- # xtrace_disable 00:05:08.003 00:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:08.003 00:47:42 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:08.003 00:47:42 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:08.003 00:47:42 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:05:08.003 00:47:42 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:08.003 00:47:42 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:05:08.003 00:47:42 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:05:08.003 00:47:42 -- common/autotest_common.sh@1450 -- # uname 00:05:08.003 00:47:42 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:05:08.003 00:47:42 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:05:08.003 00:47:42 -- common/autotest_common.sh@1470 -- # uname 00:05:08.003 00:47:42 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:05:08.003 00:47:42 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:05:08.003 00:47:42 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:08.003 lcov: LCOV version 1.15 00:05:08.003 00:47:42 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 
--rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:26.168 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:26.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:26.168 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:26.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:26.168 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:26.168 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:52.727 00:48:26 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:52.727 00:48:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:52.727 00:48:26 -- common/autotest_common.sh@10 -- # set +x 00:05:52.727 00:48:26 -- spdk/autotest.sh@89 -- # rm -f 00:05:52.727 00:48:26 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:52.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:52.727 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:52.727 00:48:26 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:52.727 00:48:26 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:52.727 00:48:26 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:52.727 00:48:26 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:52.727 00:48:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:52.727 00:48:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:52.727 00:48:26 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:52.727 00:48:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:52.727 00:48:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:52.727 00:48:26 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:52.727 00:48:26 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:05:52.727 00:48:26 -- spdk/autotest.sh@108 -- # grep -v p 00:05:52.727 00:48:26 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:52.727 00:48:26 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:52.727 00:48:26 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:52.727 00:48:26 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:52.727 00:48:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:52.727 No valid GPT data, bailing 00:05:52.727 00:48:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:52.727 00:48:26 -- scripts/common.sh@393 -- # pt= 00:05:52.727 00:48:26 -- scripts/common.sh@394 -- # return 1 00:05:52.727 00:48:26 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:52.727 1+0 records in 00:05:52.727 1+0 records out 00:05:52.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00651403 s, 161 MB/s 00:05:52.727 00:48:26 -- spdk/autotest.sh@116 -- # sync 00:05:52.727 00:48:26 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:52.727 00:48:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:52.727 
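The pre_cleanup block just traced filters out zoned namespaces before touching any block device: a namespace is skipped when its queue/zoned attribute reports anything other than "none". A loose sketch of that filter, mirroring get_zoned_devs, is shown here; the summary line at the end is only for illustration.

    # Sketch: collect zoned NVMe namespaces the way get_zoned_devs does.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        if [[ $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[${nvme##*/}]=1          # e.g. zoned_devs[nvme0n1]=1
        fi
    done
    echo "found ${#zoned_devs[@]} zoned namespace(s)"

Namespaces that pass the filter then get the spdk-gpt.py probe seen above; with "No valid GPT data, bailing" the harness writes a single 1 MiB block of zeros to the namespace and syncs before the tests start.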
00:48:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:54.634 00:48:28 -- spdk/autotest.sh@122 -- # uname -s 00:05:54.634 00:48:28 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:05:54.634 00:48:28 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:54.634 00:48:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.634 00:48:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.634 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:05:54.634 ************************************ 00:05:54.634 START TEST setup.sh 00:05:54.634 ************************************ 00:05:54.634 00:48:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:54.634 * Looking for test storage... 00:05:54.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:54.634 00:48:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:54.634 00:48:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:54.634 00:48:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:54.893 00:48:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:54.893 00:48:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:54.893 00:48:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:54.893 00:48:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:54.893 00:48:29 -- scripts/common.sh@335 -- # IFS=.-: 00:05:54.893 00:48:29 -- scripts/common.sh@335 -- # read -ra ver1 00:05:54.893 00:48:29 -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.893 00:48:29 -- scripts/common.sh@336 -- # read -ra ver2 00:05:54.893 00:48:29 -- scripts/common.sh@337 -- # local 'op=<' 00:05:54.893 00:48:29 -- scripts/common.sh@339 -- # ver1_l=2 00:05:54.893 00:48:29 -- scripts/common.sh@340 -- # ver2_l=1 00:05:54.893 00:48:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:54.893 00:48:29 -- scripts/common.sh@343 -- # case "$op" in 00:05:54.893 00:48:29 -- scripts/common.sh@344 -- # : 1 00:05:54.893 00:48:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:54.893 00:48:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.893 00:48:29 -- scripts/common.sh@364 -- # decimal 1 00:05:54.893 00:48:29 -- scripts/common.sh@352 -- # local d=1 00:05:54.893 00:48:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.893 00:48:29 -- scripts/common.sh@354 -- # echo 1 00:05:54.893 00:48:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:54.893 00:48:29 -- scripts/common.sh@365 -- # decimal 2 00:05:54.893 00:48:29 -- scripts/common.sh@352 -- # local d=2 00:05:54.893 00:48:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.893 00:48:29 -- scripts/common.sh@354 -- # echo 2 00:05:54.893 00:48:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:54.893 00:48:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:54.893 00:48:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:54.893 00:48:29 -- scripts/common.sh@367 -- # return 0 00:05:54.893 00:48:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.893 00:48:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:54.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.893 --rc genhtml_branch_coverage=1 00:05:54.893 --rc genhtml_function_coverage=1 00:05:54.893 --rc genhtml_legend=1 00:05:54.893 --rc geninfo_all_blocks=1 00:05:54.893 --rc geninfo_unexecuted_blocks=1 00:05:54.893 00:05:54.893 ' 00:05:54.893 00:48:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:54.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.893 --rc genhtml_branch_coverage=1 00:05:54.893 --rc genhtml_function_coverage=1 00:05:54.893 --rc genhtml_legend=1 00:05:54.893 --rc geninfo_all_blocks=1 00:05:54.893 --rc geninfo_unexecuted_blocks=1 00:05:54.893 00:05:54.893 ' 00:05:54.893 00:48:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:54.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.893 --rc genhtml_branch_coverage=1 00:05:54.893 --rc genhtml_function_coverage=1 00:05:54.893 --rc genhtml_legend=1 00:05:54.893 --rc geninfo_all_blocks=1 00:05:54.893 --rc geninfo_unexecuted_blocks=1 00:05:54.893 00:05:54.893 ' 00:05:54.893 00:48:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:54.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.893 --rc genhtml_branch_coverage=1 00:05:54.893 --rc genhtml_function_coverage=1 00:05:54.893 --rc genhtml_legend=1 00:05:54.893 --rc geninfo_all_blocks=1 00:05:54.893 --rc geninfo_unexecuted_blocks=1 00:05:54.893 00:05:54.893 ' 00:05:54.893 00:48:29 -- setup/test-setup.sh@10 -- # uname -s 00:05:54.893 00:48:29 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:54.893 00:48:29 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:54.893 00:48:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.893 00:48:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.893 00:48:29 -- common/autotest_common.sh@10 -- # set +x 00:05:54.893 ************************************ 00:05:54.893 START TEST acl 00:05:54.893 ************************************ 00:05:54.893 00:48:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:54.893 * Looking for test storage... 
00:05:54.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:54.893 00:48:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:54.893 00:48:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:54.893 00:48:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:54.893 00:48:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:54.893 00:48:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:54.893 00:48:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:54.893 00:48:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:54.893 00:48:29 -- scripts/common.sh@335 -- # IFS=.-: 00:05:54.893 00:48:29 -- scripts/common.sh@335 -- # read -ra ver1 00:05:54.893 00:48:29 -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.893 00:48:29 -- scripts/common.sh@336 -- # read -ra ver2 00:05:54.893 00:48:29 -- scripts/common.sh@337 -- # local 'op=<' 00:05:54.893 00:48:29 -- scripts/common.sh@339 -- # ver1_l=2 00:05:54.893 00:48:29 -- scripts/common.sh@340 -- # ver2_l=1 00:05:54.893 00:48:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:55.153 00:48:29 -- scripts/common.sh@343 -- # case "$op" in 00:05:55.153 00:48:29 -- scripts/common.sh@344 -- # : 1 00:05:55.153 00:48:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:55.153 00:48:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.153 00:48:29 -- scripts/common.sh@364 -- # decimal 1 00:05:55.153 00:48:29 -- scripts/common.sh@352 -- # local d=1 00:05:55.153 00:48:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.153 00:48:29 -- scripts/common.sh@354 -- # echo 1 00:05:55.153 00:48:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:55.153 00:48:29 -- scripts/common.sh@365 -- # decimal 2 00:05:55.153 00:48:29 -- scripts/common.sh@352 -- # local d=2 00:05:55.153 00:48:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.153 00:48:29 -- scripts/common.sh@354 -- # echo 2 00:05:55.153 00:48:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:55.153 00:48:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:55.153 00:48:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:55.153 00:48:29 -- scripts/common.sh@367 -- # return 0 00:05:55.153 00:48:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.153 00:48:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:55.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.153 --rc genhtml_branch_coverage=1 00:05:55.153 --rc genhtml_function_coverage=1 00:05:55.153 --rc genhtml_legend=1 00:05:55.153 --rc geninfo_all_blocks=1 00:05:55.153 --rc geninfo_unexecuted_blocks=1 00:05:55.153 00:05:55.153 ' 00:05:55.153 00:48:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:55.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.153 --rc genhtml_branch_coverage=1 00:05:55.153 --rc genhtml_function_coverage=1 00:05:55.153 --rc genhtml_legend=1 00:05:55.153 --rc geninfo_all_blocks=1 00:05:55.153 --rc geninfo_unexecuted_blocks=1 00:05:55.153 00:05:55.153 ' 00:05:55.153 00:48:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:55.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.153 --rc genhtml_branch_coverage=1 00:05:55.153 --rc genhtml_function_coverage=1 00:05:55.153 --rc genhtml_legend=1 00:05:55.153 --rc geninfo_all_blocks=1 00:05:55.153 --rc geninfo_unexecuted_blocks=1 00:05:55.153 00:05:55.153 ' 00:05:55.153 00:48:29 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:55.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.153 --rc genhtml_branch_coverage=1 00:05:55.153 --rc genhtml_function_coverage=1 00:05:55.153 --rc genhtml_legend=1 00:05:55.153 --rc geninfo_all_blocks=1 00:05:55.153 --rc geninfo_unexecuted_blocks=1 00:05:55.153 00:05:55.153 ' 00:05:55.153 00:48:29 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:55.153 00:48:29 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:55.153 00:48:29 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:55.153 00:48:29 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:55.153 00:48:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:55.153 00:48:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:55.153 00:48:29 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:55.153 00:48:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:55.153 00:48:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:55.153 00:48:29 -- setup/acl.sh@12 -- # devs=() 00:05:55.153 00:48:29 -- setup/acl.sh@12 -- # declare -a devs 00:05:55.153 00:48:29 -- setup/acl.sh@13 -- # drivers=() 00:05:55.153 00:48:29 -- setup/acl.sh@13 -- # declare -A drivers 00:05:55.153 00:48:29 -- setup/acl.sh@51 -- # setup reset 00:05:55.153 00:48:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:55.153 00:48:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:55.722 00:48:29 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:55.722 00:48:29 -- setup/acl.sh@16 -- # local dev driver 00:05:55.722 00:48:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:55.722 00:48:29 -- setup/acl.sh@15 -- # setup output status 00:05:55.722 00:48:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:55.722 00:48:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:55.722 Hugepages 00:05:55.722 node hugesize free / total 00:05:55.722 00:48:30 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:55.722 00:48:30 -- setup/acl.sh@19 -- # continue 00:05:55.722 00:48:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:55.722 00:05:55.722 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:55.722 00:48:30 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:55.722 00:48:30 -- setup/acl.sh@19 -- # continue 00:05:55.722 00:48:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:55.981 00:48:30 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:55.981 00:48:30 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:55.981 00:48:30 -- setup/acl.sh@20 -- # continue 00:05:55.981 00:48:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:55.981 00:48:30 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:55.981 00:48:30 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:55.981 00:48:30 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:55.981 00:48:30 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:55.981 00:48:30 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:55.981 00:48:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:55.981 00:48:30 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:55.981 00:48:30 -- setup/acl.sh@54 -- # run_test denied denied 00:05:55.981 00:48:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.981 00:48:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.981 00:48:30 -- 
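The collect_setup_devs loop traced above parses the setup.sh status table (Type / BDF / Vendor / Device / NUMA / Driver / Block devices): hugepage and header rows fail the BDF pattern, the virtio-pci controller at 0000:00:03.0 is skipped, and the NVMe controller at 0000:00:06.0 is recorded in devs/drivers. A loose sketch of that loop follows; the final printf is illustrative only.

    # Sketch: keep only NVMe-bound controllers from `setup.sh status`,
    # mirroring the read pattern the trace shows.
    declare -a devs=()
    declare -A drivers=()
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue            # skip header/hugepage rows
        [[ $driver == nvme ]] || continue            # only NVMe-bound functions
        [[ ${PCI_BLOCKED:-} == *"$dev"* ]] && continue
        devs+=("$dev")
        drivers[$dev]=$driver
    done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh status)
    printf 'collected %d device(s): %s\n' "${#devs[@]}" "${devs[*]}"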
common/autotest_common.sh@10 -- # set +x 00:05:55.981 ************************************ 00:05:55.981 START TEST denied 00:05:55.981 ************************************ 00:05:55.981 00:48:30 -- common/autotest_common.sh@1114 -- # denied 00:05:55.981 00:48:30 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:55.981 00:48:30 -- setup/acl.sh@38 -- # setup output config 00:05:55.981 00:48:30 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:55.981 00:48:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:55.981 00:48:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:58.517 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:58.517 00:48:32 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:58.517 00:48:32 -- setup/acl.sh@28 -- # local dev driver 00:05:58.517 00:48:32 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:58.517 00:48:32 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:58.517 00:48:32 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:58.517 00:48:32 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:58.517 00:48:32 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:58.517 00:48:32 -- setup/acl.sh@41 -- # setup reset 00:05:58.517 00:48:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:58.517 00:48:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:58.775 00:05:58.775 real 0m2.824s 00:05:58.775 user 0m0.581s 00:05:58.775 sys 0m2.317s 00:05:58.775 00:48:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.775 00:48:33 -- common/autotest_common.sh@10 -- # set +x 00:05:58.775 ************************************ 00:05:58.775 END TEST denied 00:05:58.775 ************************************ 00:05:59.034 00:48:33 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:59.034 00:48:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.034 00:48:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.034 00:48:33 -- common/autotest_common.sh@10 -- # set +x 00:05:59.034 ************************************ 00:05:59.034 START TEST allowed 00:05:59.034 ************************************ 00:05:59.034 00:48:33 -- common/autotest_common.sh@1114 -- # allowed 00:05:59.034 00:48:33 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:59.034 00:48:33 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:59.034 00:48:33 -- setup/acl.sh@45 -- # setup output config 00:05:59.034 00:48:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:59.034 00:48:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:01.572 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:01.572 00:48:35 -- setup/acl.sh@47 -- # verify 00:06:01.572 00:48:35 -- setup/acl.sh@28 -- # local dev driver 00:06:01.572 00:48:35 -- setup/acl.sh@48 -- # setup reset 00:06:01.572 00:48:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:01.572 00:48:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:02.139 00:06:02.139 real 0m3.019s 00:06:02.139 user 0m0.481s 00:06:02.139 sys 0m2.550s 00:06:02.139 00:48:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.139 00:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:02.139 ************************************ 00:06:02.139 END TEST allowed 00:06:02.139 ************************************ 00:06:02.139 00:06:02.139 real 0m7.196s 
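The denied/allowed pair that just finished exercises PCI filtering in setup.sh: with the controller listed in PCI_BLOCKED, config must print the "Skipping denied controller" line; with it in PCI_ALLOWED, the same controller must be rebound to a userspace driver (uio_pci_generic on this runner). The sketch below only restates that flow with the grep patterns taken from the trace; it rebinds devices, so it is a model of the test, not something to run casually.

    # Sketch: the allow/deny check the acl test performs on one controller.
    bdf=0000:00:06.0
    setup=/home/vagrant/spdk_repo/spdk/scripts/setup.sh

    PCI_BLOCKED=" $bdf" "$setup" config \
        | grep "Skipping denied controller at $bdf"

    PCI_ALLOWED="$bdf" "$setup" config \
        | grep -E "$bdf .*: nvme -> .*"

    # the test then inspects the bound driver directly:
    readlink -f "/sys/bus/pci/devices/$bdf/driver"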
00:06:02.139 user 0m1.720s 00:06:02.139 sys 0m5.651s 00:06:02.139 00:48:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.139 00:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:02.139 ************************************ 00:06:02.139 END TEST acl 00:06:02.139 ************************************ 00:06:02.139 00:48:36 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:02.139 00:48:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.139 00:48:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.139 00:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:02.139 ************************************ 00:06:02.139 START TEST hugepages 00:06:02.139 ************************************ 00:06:02.139 00:48:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:02.139 * Looking for test storage... 00:06:02.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:02.139 00:48:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:02.139 00:48:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:02.139 00:48:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:02.399 00:48:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:02.399 00:48:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:02.399 00:48:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:02.399 00:48:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:02.399 00:48:36 -- scripts/common.sh@335 -- # IFS=.-: 00:06:02.399 00:48:36 -- scripts/common.sh@335 -- # read -ra ver1 00:06:02.399 00:48:36 -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.399 00:48:36 -- scripts/common.sh@336 -- # read -ra ver2 00:06:02.399 00:48:36 -- scripts/common.sh@337 -- # local 'op=<' 00:06:02.399 00:48:36 -- scripts/common.sh@339 -- # ver1_l=2 00:06:02.399 00:48:36 -- scripts/common.sh@340 -- # ver2_l=1 00:06:02.399 00:48:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:02.399 00:48:36 -- scripts/common.sh@343 -- # case "$op" in 00:06:02.399 00:48:36 -- scripts/common.sh@344 -- # : 1 00:06:02.399 00:48:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:02.399 00:48:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.399 00:48:36 -- scripts/common.sh@364 -- # decimal 1 00:06:02.399 00:48:36 -- scripts/common.sh@352 -- # local d=1 00:06:02.399 00:48:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.399 00:48:36 -- scripts/common.sh@354 -- # echo 1 00:06:02.399 00:48:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:02.399 00:48:36 -- scripts/common.sh@365 -- # decimal 2 00:06:02.399 00:48:36 -- scripts/common.sh@352 -- # local d=2 00:06:02.399 00:48:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.399 00:48:36 -- scripts/common.sh@354 -- # echo 2 00:06:02.399 00:48:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:02.399 00:48:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:02.399 00:48:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:02.399 00:48:36 -- scripts/common.sh@367 -- # return 0 00:06:02.399 00:48:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.399 00:48:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:02.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.400 --rc genhtml_branch_coverage=1 00:06:02.400 --rc genhtml_function_coverage=1 00:06:02.400 --rc genhtml_legend=1 00:06:02.400 --rc geninfo_all_blocks=1 00:06:02.400 --rc geninfo_unexecuted_blocks=1 00:06:02.400 00:06:02.400 ' 00:06:02.400 00:48:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:02.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.400 --rc genhtml_branch_coverage=1 00:06:02.400 --rc genhtml_function_coverage=1 00:06:02.400 --rc genhtml_legend=1 00:06:02.400 --rc geninfo_all_blocks=1 00:06:02.400 --rc geninfo_unexecuted_blocks=1 00:06:02.400 00:06:02.400 ' 00:06:02.400 00:48:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:02.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.400 --rc genhtml_branch_coverage=1 00:06:02.400 --rc genhtml_function_coverage=1 00:06:02.400 --rc genhtml_legend=1 00:06:02.400 --rc geninfo_all_blocks=1 00:06:02.400 --rc geninfo_unexecuted_blocks=1 00:06:02.400 00:06:02.400 ' 00:06:02.400 00:48:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:02.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.400 --rc genhtml_branch_coverage=1 00:06:02.400 --rc genhtml_function_coverage=1 00:06:02.400 --rc genhtml_legend=1 00:06:02.400 --rc geninfo_all_blocks=1 00:06:02.400 --rc geninfo_unexecuted_blocks=1 00:06:02.400 00:06:02.400 ' 00:06:02.400 00:48:36 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:02.400 00:48:36 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:02.400 00:48:36 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:02.400 00:48:36 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:02.400 00:48:36 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:02.400 00:48:36 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:02.400 00:48:36 -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:02.400 00:48:36 -- setup/common.sh@18 -- # local node= 00:06:02.400 00:48:36 -- setup/common.sh@19 -- # local var val 00:06:02.400 00:48:36 -- setup/common.sh@20 -- # local mem_f mem 00:06:02.400 00:48:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.400 00:48:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.400 00:48:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:02.400 00:48:36 -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.400 
00:48:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 1655356 kB' 'MemAvailable: 7380736 kB' 'Buffers: 45368 kB' 'Cached: 5772176 kB' 'SwapCached: 0 kB' 'Active: 1644828 kB' 'Inactive: 4305188 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 143048 kB' 'Active(file): 1643740 kB' 'Inactive(file): 4162140 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 208 kB' 'Writeback: 4 kB' 'AnonPages: 161816 kB' 'Mapped: 69496 kB' 'Shmem: 2600 kB' 'KReclaimable: 240804 kB' 'Slab: 312700 kB' 'SReclaimable: 240804 kB' 'SUnreclaim: 71896 kB' 'KernelStack: 5104 kB' 'PageTables: 3740 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024336 kB' 'Committed_AS: 510716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 
00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.400 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.400 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- 
setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@32 -- # continue 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # IFS=': ' 00:06:02.401 00:48:36 -- setup/common.sh@31 -- # read -r var val _ 00:06:02.401 00:48:36 -- 
setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:02.401 00:48:36 -- setup/common.sh@33 -- # echo 2048 00:06:02.401 00:48:36 -- setup/common.sh@33 -- # return 0 00:06:02.401 00:48:36 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:06:02.401 00:48:36 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:06:02.401 00:48:36 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:06:02.401 00:48:36 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:06:02.401 00:48:36 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:06:02.401 00:48:36 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:06:02.401 00:48:36 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:06:02.401 00:48:36 -- setup/hugepages.sh@207 -- # get_nodes 00:06:02.401 00:48:36 -- setup/hugepages.sh@27 -- # local node 00:06:02.401 00:48:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:02.401 00:48:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:06:02.401 00:48:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:02.401 00:48:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:02.401 00:48:36 -- setup/hugepages.sh@208 -- # clear_hp 00:06:02.401 00:48:36 -- setup/hugepages.sh@37 -- # local node hp 00:06:02.401 00:48:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:02.401 00:48:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:02.401 00:48:36 -- setup/hugepages.sh@41 -- # echo 0 00:06:02.401 00:48:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:02.401 00:48:36 -- setup/hugepages.sh@41 -- # echo 0 00:06:02.401 00:48:36 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:02.401 00:48:36 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:02.401 00:48:36 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:06:02.401 00:48:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.401 00:48:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.401 00:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:02.401 ************************************ 00:06:02.401 START TEST default_setup 00:06:02.401 ************************************ 00:06:02.401 00:48:36 -- common/autotest_common.sh@1114 -- # default_setup 00:06:02.401 00:48:36 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:06:02.401 00:48:36 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:02.401 00:48:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:02.401 00:48:36 -- setup/hugepages.sh@51 -- # shift 00:06:02.401 00:48:36 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:02.401 00:48:36 -- setup/hugepages.sh@52 -- # local node_ids 00:06:02.401 00:48:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:02.401 00:48:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:02.401 00:48:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:02.401 00:48:36 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:02.401 00:48:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:02.401 00:48:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:02.401 00:48:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:02.401 00:48:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:02.401 00:48:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:02.401 00:48:36 -- 
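The long field-matching run above is get_meminfo walking /proc/meminfo until it reaches Hugepagesize and echoes 2048. A compact sketch of the same lookup is below; awk is used here for brevity (the real helper strips the "Node N" prefix with a bash expansion and a read loop, exactly as traced), and the function name simply reuses the one in the log.

    # Sketch: print one /proc/meminfo (or per-node meminfo) field.
    get_meminfo() {
        local key=$1 node=${2:-}
        local src=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && src=/sys/devices/system/node/node$node/meminfo
        awk -v key="$key" \
            '{ sub(/^Node [0-9]+ /, "") } $1 == key":" { print $2 }' "$src"
    }
    get_meminfo Hugepagesize        # -> 2048 on this runner

With the 2048 kB page size in hand, get_test_nr_hugepages turns the requested 2097152 kB into 2097152 / 2048 = 1024 pages, which is the nr_hugepages value the default_setup test configures next.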
setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:02.401 00:48:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:02.401 00:48:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:02.401 00:48:36 -- setup/hugepages.sh@73 -- # return 0 00:06:02.401 00:48:36 -- setup/hugepages.sh@137 -- # setup output 00:06:02.401 00:48:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:02.401 00:48:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:02.969 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:02.969 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:03.910 00:48:38 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:06:03.910 00:48:38 -- setup/hugepages.sh@89 -- # local node 00:06:03.910 00:48:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:03.910 00:48:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:03.910 00:48:38 -- setup/hugepages.sh@92 -- # local surp 00:06:03.910 00:48:38 -- setup/hugepages.sh@93 -- # local resv 00:06:03.910 00:48:38 -- setup/hugepages.sh@94 -- # local anon 00:06:03.910 00:48:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:03.910 00:48:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:03.910 00:48:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:03.910 00:48:38 -- setup/common.sh@18 -- # local node= 00:06:03.910 00:48:38 -- setup/common.sh@19 -- # local var val 00:06:03.910 00:48:38 -- setup/common.sh@20 -- # local mem_f mem 00:06:03.910 00:48:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.910 00:48:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.910 00:48:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.910 00:48:38 -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.910 00:48:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.910 00:48:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3756324 kB' 'MemAvailable: 9481812 kB' 'Buffers: 45368 kB' 'Cached: 5772216 kB' 'SwapCached: 0 kB' 'Active: 1644916 kB' 'Inactive: 4306384 kB' 'Active(anon): 1096 kB' 'Inactive(anon): 144280 kB' 'Active(file): 1643820 kB' 'Inactive(file): 4162104 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 162936 kB' 'Mapped: 68956 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312796 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71928 kB' 'KernelStack: 4992 kB' 'PageTables: 3544 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 511856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # read -r var val 
_ 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.910 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.910 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # 
IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # 
continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.911 00:48:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.911 00:48:38 -- setup/common.sh@33 -- # echo 0 00:06:03.911 00:48:38 -- setup/common.sh@33 -- # return 0 00:06:03.911 00:48:38 -- setup/hugepages.sh@97 -- # anon=0 00:06:03.911 00:48:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:03.911 00:48:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:03.911 00:48:38 -- setup/common.sh@18 -- # local node= 00:06:03.911 00:48:38 -- setup/common.sh@19 -- # local var val 00:06:03.911 00:48:38 -- setup/common.sh@20 -- # local mem_f mem 
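[editor's note] The long runs of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" above are bash xtrace of a field lookup over /proc/meminfo: the script reads each "key: value" row and keeps skipping until the requested key matches, then echoes the value. A minimal, hypothetical sketch of that pattern follows; the function name and structure are inferred from the trace (the real setup/common.sh uses mapfile into an array), not copied from the repo.

    # Sketch only: look up one key in /proc/meminfo, or in a node-specific
    # meminfo file when a node number is given (as the per-node checks below do).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # per-node files prefix every row with "Node <n> ", so strip that first
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"      # e.g. "0" for AnonHugePages in the trace above
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

Calling get_meminfo_sketch AnonHugePages on the system shown in this log would print 0, which matches the "echo 0 / return 0 / anon=0" lines traced above.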
00:06:03.911 00:48:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.911 00:48:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.911 00:48:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.911 00:48:38 -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.911 00:48:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.911 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3756324 kB' 'MemAvailable: 9481812 kB' 'Buffers: 45368 kB' 'Cached: 5772216 kB' 'SwapCached: 0 kB' 'Active: 1644916 kB' 'Inactive: 4306424 kB' 'Active(anon): 1096 kB' 'Inactive(anon): 144320 kB' 'Active(file): 1643820 kB' 'Inactive(file): 4162104 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 162988 kB' 'Mapped: 68956 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312796 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71928 kB' 'KernelStack: 5040 kB' 'PageTables: 3688 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 511856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 
00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.912 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.912 00:48:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.913 00:48:38 -- setup/common.sh@33 -- # echo 0 00:06:03.913 00:48:38 -- setup/common.sh@33 -- # return 0 00:06:03.913 00:48:38 -- setup/hugepages.sh@99 -- # surp=0 00:06:03.913 00:48:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:03.913 00:48:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:03.913 00:48:38 -- setup/common.sh@18 -- # local node= 00:06:03.913 00:48:38 -- setup/common.sh@19 -- # local var val 00:06:03.913 00:48:38 -- setup/common.sh@20 -- # local mem_f mem 00:06:03.913 00:48:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.913 00:48:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.913 00:48:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.913 00:48:38 -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.913 00:48:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3756324 kB' 'MemAvailable: 9481812 kB' 'Buffers: 45368 kB' 'Cached: 5772216 kB' 'SwapCached: 0 kB' 'Active: 1644908 kB' 'Inactive: 4306192 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 144088 kB' 'Active(file): 1643820 kB' 'Inactive(file): 4162104 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 162720 kB' 'Mapped: 68996 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312796 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71928 kB' 'KernelStack: 5008 kB' 'PageTables: 3616 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 511856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # 
IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.913 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.913 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 
00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.914 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.914 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 
-- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.915 00:48:38 -- setup/common.sh@33 -- # echo 0 00:06:03.915 00:48:38 -- setup/common.sh@33 -- # return 0 00:06:03.915 00:48:38 -- setup/hugepages.sh@100 -- # resv=0 00:06:03.915 00:48:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:03.915 nr_hugepages=1024 00:06:03.915 00:48:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:03.915 resv_hugepages=0 00:06:03.915 00:48:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:03.915 surplus_hugepages=0 00:06:03.915 00:48:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:03.915 anon_hugepages=0 00:06:03.915 00:48:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:03.915 00:48:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:03.915 00:48:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:03.915 00:48:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:03.915 00:48:38 -- setup/common.sh@18 -- # local node= 00:06:03.915 00:48:38 -- setup/common.sh@19 -- # local var val 00:06:03.915 00:48:38 -- setup/common.sh@20 -- # local mem_f mem 00:06:03.915 00:48:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.915 00:48:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.915 00:48:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.915 00:48:38 -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.915 00:48:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3756072 kB' 'MemAvailable: 9481560 kB' 'Buffers: 45368 kB' 'Cached: 5772216 kB' 'SwapCached: 0 kB' 'Active: 1644908 kB' 'Inactive: 4306276 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 144172 kB' 'Active(file): 1643820 kB' 'Inactive(file): 4162104 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 162816 kB' 'Mapped: 68964 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312796 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71928 kB' 'KernelStack: 5060 kB' 'PageTables: 3812 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 511856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20572 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:03.915 
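[editor's note] At this point the trace has collected anon=0, surplus=0 and reserved=0 and compares them against the requested page count ("(( 1024 == nr_hugepages + surp + resv ))"). A hedged re-creation of that consistency check, using the variable names echoed in the log rather than the script's actual code:

    # Sketch only: values as reported by the lookups traced above.
    nr_hugepages=1024
    anon=0 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in this log
    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    fi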
00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.915 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.915 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 
00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 
-- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # continue 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:03.916 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:03.916 00:48:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.916 00:48:38 -- setup/common.sh@33 -- # echo 1024 00:06:03.916 00:48:38 -- setup/common.sh@33 -- # return 0 00:06:03.916 00:48:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:03.916 00:48:38 -- setup/hugepages.sh@112 -- # get_nodes 00:06:03.916 00:48:38 -- setup/hugepages.sh@27 -- # local node 00:06:03.916 00:48:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:03.916 00:48:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:04.176 00:48:38 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:04.176 00:48:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:04.177 00:48:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:04.177 00:48:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:04.177 00:48:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:04.177 00:48:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:04.177 00:48:38 -- setup/common.sh@18 -- # local node=0 00:06:04.177 00:48:38 -- setup/common.sh@19 -- # local var val 00:06:04.177 00:48:38 -- setup/common.sh@20 -- # local mem_f mem 00:06:04.177 00:48:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:04.177 00:48:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:04.177 00:48:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:04.177 00:48:38 -- setup/common.sh@28 -- # mapfile -t mem 00:06:04.177 00:48:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.177 00:48:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3756072 kB' 'MemUsed: 8486904 kB' 'SwapCached: 0 kB' 'Active: 1644908 kB' 'Inactive: 4305996 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 143892 kB' 'Active(file): 1643820 kB' 'Inactive(file): 4162104 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 5817584 kB' 'Mapped: 68964 kB' 'AnonPages: 162584 kB' 'Shmem: 2596 kB' 'KernelStack: 5008 kB' 'PageTables: 3596 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 240868 kB' 'Slab: 312796 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # continue 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # continue 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # continue 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # continue 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # continue 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # continue 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # continue 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # continue 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # continue 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # continue 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # continue 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.177 00:48:38 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.177 00:48:38 -- setup/common.sh@32 -- # continue
[trace elided: get_meminfo walks the remaining node0 meminfo keys (Dirty through HugePages_Free) with IFS=': ' / read -r var val _ / continue until it reaches the requested field]
00:06:04.178 00:48:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:04.178 00:48:38 -- setup/common.sh@33 -- # echo 0
00:06:04.178 00:48:38 -- setup/common.sh@33 -- # return 0
00:06:04.178 00:48:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:04.178 00:48:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:04.178 00:48:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:04.178 00:48:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:04.178 00:48:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:06:04.178 node0=1024 expecting 1024
00:06:04.178 00:48:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:04.178
00:06:04.178 real 0m1.706s
00:06:04.178 user 0m0.289s
00:06:04.178 sys 0m1.372s
00:06:04.178 00:48:38 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:04.178 00:48:38 -- common/autotest_common.sh@10 -- # set +x
00:06:04.178 ************************************
00:06:04.178 END TEST default_setup
00:06:04.178 ************************************
00:06:04.178 00:48:38 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:06:04.178 00:48:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:04.178 00:48:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:04.178 00:48:38 -- common/autotest_common.sh@10 -- # set +x
00:06:04.178 ************************************
00:06:04.178 START TEST per_node_1G_alloc
00:06:04.178 ************************************
00:06:04.178 00:48:38 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:06:04.178 00:48:38 -- setup/hugepages.sh@143 -- # local IFS=,
00:06:04.178 00:48:38 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:06:04.178 00:48:38 -- setup/hugepages.sh@49 -- # local size=1048576
00:06:04.178 00:48:38 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:06:04.178 00:48:38 -- setup/hugepages.sh@51 -- # shift
00:06:04.178 00:48:38 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:06:04.178 00:48:38 -- setup/hugepages.sh@52 -- # local node_ids
00:06:04.178 00:48:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:04.178 00:48:38 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:06:04.178 00:48:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:06:04.178 00:48:38 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:06:04.178 00:48:38 -- setup/hugepages.sh@62 -- # local user_nodes
00:06:04.178 00:48:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:06:04.178 00:48:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:04.178 00:48:38 -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:04.178 00:48:38 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:04.178 00:48:38 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:06:04.178 00:48:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:06:04.178 00:48:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:06:04.178 00:48:38 -- setup/hugepages.sh@73 -- # return 0
00:06:04.178 00:48:38 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:06:04.178 00:48:38 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:06:04.178 00:48:38 -- setup/hugepages.sh@146 -- # setup output
00:06:04.178 00:48:38 -- setup/common.sh@9 -- # [[ output == output ]]
00:06:04.178 00:48:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:04.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:06:04.436 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
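For readers following the trace: the get_test_nr_hugepages 1048576 0 call above is plain arithmetic, 1 GiB expressed in kB divided by the 2048 kB default hugepage size (the Hugepagesize field in the meminfo dumps below), which yields the 512 pages that NRHUGE/HUGENODE then hand to scripts/setup.sh. The sketch below follows the names in the trace, but the function body is a simplified reconstruction rather than a copy of hugepages.sh.

    # Convert a requested size in kB into a hugepage count and pin it to the
    # requested NUMA nodes (here: 1048576 kB, i.e. 1 GiB, on node 0).
    get_test_nr_hugepages() {
        local size=$1; shift
        local node_ids=("$@")
        local default_hugepages=2048              # kB, Hugepagesize from /proc/meminfo
        local -g nodes_test=()
        local node

        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages
        done
    }

    get_test_nr_hugepages 1048576 0
    echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"   # nr_hugepages=512 node0=512
    # The trace then performs the actual allocation via
    #   NRHUGE=512 HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh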
00:06:05.010 00:48:39 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:06:05.010 00:48:39 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:06:05.010 00:48:39 -- setup/hugepages.sh@89 -- # local node
00:06:05.010 00:48:39 -- setup/hugepages.sh@90 -- # local sorted_t
00:06:05.010 00:48:39 -- setup/hugepages.sh@91 -- # local sorted_s
00:06:05.010 00:48:39 -- setup/hugepages.sh@92 -- # local surp
00:06:05.010 00:48:39 -- setup/hugepages.sh@93 -- # local resv
00:06:05.010 00:48:39 -- setup/hugepages.sh@94 -- # local anon
00:06:05.010 00:48:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
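The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test just traced is a gate on the transparent-hugepage mode string before AnonHugePages is read. A minimal sketch, assuming the string comes from the standard sysfs knob (the trace only shows the expanded value, not its source):

    # Count AnonHugePages only when THP is not forced off; with THP at [never]
    # the kernel will not hand out anonymous huge pages in the first place.
    anon=0
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # value in kB
    fi
    echo "anon_hugepages=$anon"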
00:06:05.010 00:48:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:05.010 00:48:39 -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:05.010 00:48:39 -- setup/common.sh@18 -- # local node=
00:06:05.010 00:48:39 -- setup/common.sh@19 -- # local var val
00:06:05.010 00:48:39 -- setup/common.sh@20 -- # local mem_f mem
00:06:05.010 00:48:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:05.010 00:48:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:05.010 00:48:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:05.010 00:48:39 -- setup/common.sh@28 -- # mapfile -t mem
00:06:05.010 00:48:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:05.010 00:48:39 -- setup/common.sh@31 -- # IFS=': '
00:06:05.010 00:48:39 -- setup/common.sh@31 -- # read -r var val _
00:06:05.010 00:48:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4805144 kB' 'MemAvailable: 10530640 kB' 'Buffers: 45376 kB' 'Cached: 5772220 kB' 'SwapCached: 0 kB' 'Active: 1644980 kB' 'Inactive: 4306240 kB' 'Active(anon): 1096 kB' 'Inactive(anon): 144192 kB' 'Active(file): 1643884 kB' 'Inactive(file): 4162048 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 162700 kB' 'Mapped: 68960 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312724 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71856 kB' 'KernelStack: 5076 kB' 'PageTables: 3628 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 511856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20588 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB'
[trace elided: the read loop skips every key from MemTotal through HardwareCorrupted with continue until it reaches AnonHugePages]
00:06:05.010 00:48:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:05.010 00:48:39 -- setup/common.sh@33 -- # echo 0
00:06:05.010 00:48:39 -- setup/common.sh@33 -- # return 0
00:06:05.010 00:48:39 -- setup/hugepages.sh@97 -- # anon=0
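The block above is one complete call of the get_meminfo helper from setup/common.sh: it dumps the meminfo file once, then walks it key by key until the requested field matches and echoes that field's value. A self-contained reconstruction is sketched below; the loop structure is inferred from the trace, not copied from the script.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node <N> " prefix strip below

    # Return one field from /proc/meminfo, or from a per-node meminfo file when
    # a node id is passed as the second argument.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem

        # Per-node lookups read the node's own meminfo file instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip it so the
        # "Key: value" layout matches /proc/meminfo.
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages      # -> 0 on this host, as in the trace
    get_meminfo HugePages_Surp 0   # -> 0, read from node0/meminfo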
00:06:05.010 00:48:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[trace elided: same get_meminfo prologue as the AnonHugePages call above (mem_f=/proc/meminfo, mapfile -t mem, "Node N " prefix strip, IFS=': ')]
00:06:05.010 00:48:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4805408 kB' 'MemAvailable: 10530904 kB' 'Buffers: 45376 kB' 'Cached: 5772220 kB' 'SwapCached: 0 kB' 'Active: 1644972 kB' 'Inactive: 4306012 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 143964 kB' 'Active(file): 1643884 kB' 'Inactive(file): 4162048 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 162968 kB' 'Mapped: 68956 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312724 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71856 kB' 'KernelStack: 5040 kB' 'PageTables: 3732 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 511856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20572 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB'
[trace elided: per-key read/continue scan of the snapshot until HugePages_Surp matches]
00:06:05.011 00:48:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:05.011 00:48:39 -- setup/common.sh@33 -- # echo 0
00:06:05.011 00:48:39 -- setup/common.sh@33 -- # return 0
00:06:05.011 00:48:39 -- setup/hugepages.sh@99 -- # surp=0
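With anon and surp now in hand, verify_nr_hugepages fetches HugePages_Rsvd next. For orientation: HugePages_Surp counts surplus pages handed out beyond the configured pool, and HugePages_Rsvd counts pages already promised to mappings but not yet faulted in. A compact sketch of this bookkeeping, reusing the get_meminfo helper sketched earlier (the echo lines mirror the nr_hugepages=512 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 output further down):

    # Gather the counters the verification depends on.
    surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)   # pages reserved for mappings, not yet faulted in
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"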
00:06:05.011 00:48:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[trace elided: same get_meminfo prologue as above]
00:06:05.011 00:48:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4805900 kB' 'MemAvailable: 10531396 kB' 'Buffers: 45376 kB' 'Cached: 5772220 kB' 'SwapCached: 0 kB' 'Active: 1644964 kB' 'Inactive: 4306104 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 144056 kB' 'Active(file): 1643884 kB' 'Inactive(file): 4162048 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 162884 kB' 'Mapped: 68960 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312756 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71888 kB' 'KernelStack: 5012 kB' 'PageTables: 3860 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 511856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20588 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB'
[trace elided: per-key read/continue scan of the snapshot until HugePages_Rsvd matches]
00:06:05.272 00:48:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:05.272 00:48:39 -- setup/common.sh@33 -- # echo 0
00:06:05.272 00:48:39 -- setup/common.sh@33 -- # return 0
00:06:05.272 00:48:39 -- setup/hugepages.sh@100 -- # resv=0
00:06:05.272 00:48:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:06:05.272 nr_hugepages=512
00:06:05.272 00:48:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:05.272 resv_hugepages=0
00:06:05.272 00:48:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:05.272 surplus_hugepages=0
00:06:05.272 00:48:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:05.272 anon_hugepages=0
00:06:05.272 00:48:39 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:05.272 00:48:39 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
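The two arithmetic tests above, together with the HugePages_Total lookup and the get_nodes pass that follow, amount to the checks sketched below. The sketch reuses get_meminfo and the nodes_test array from the earlier sketches; the real hugepages.sh builds sorted_t/sorted_s arrays for the final comparison, which is folded into one loop here for brevity.

    # Global accounting: the kernel's pool must equal requested + surplus + reserved.
    total=$(get_meminfo HugePages_Total)                     # 512 in this run
    (( total == nr_hugepages + surp + resv )) || { echo "unexpected HugePages_Total: $total" >&2; exit 1; }

    # Per-node accounting: read each node's pool, then compare it with what the
    # test expected to land there (plus that node's reserved and surplus pages).
    declare -a nodes_sys
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        nodes_sys[node]=$(get_meminfo HugePages_Total "$node")
    done
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done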
00:06:05.273 00:48:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[trace elided: same get_meminfo prologue as above]
00:06:05.273 00:48:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4805900 kB' 'MemAvailable: 10531400 kB' 'Buffers: 45376 kB' 'Cached: 5772220 kB' 'SwapCached: 0 kB' 'Active: 1644964 kB' 'Inactive: 4306408 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 144356 kB' 'Active(file): 1643884 kB' 'Inactive(file): 4162052 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 162896 kB' 'Mapped: 68960 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312756 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71888 kB' 'KernelStack: 5012 kB' 'PageTables: 3860 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 511856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20604 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB'
[trace elided: per-key read/continue scan of the snapshot until HugePages_Total matches]
00:06:05.274 00:48:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:05.274 00:48:39 -- setup/common.sh@33 -- # echo 512
00:06:05.274 00:48:39 -- setup/common.sh@33 -- # return 0
00:06:05.274 00:48:39 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:05.274 00:48:39 -- setup/hugepages.sh@112 -- # get_nodes
00:06:05.274 00:48:39 -- setup/hugepages.sh@27 -- # local node
00:06:05.274 00:48:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:05.274 00:48:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:05.274 00:48:39 -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:05.274 00:48:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:05.274 00:48:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:05.274 00:48:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:05.274 00:48:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:05.274 00:48:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:05.274 00:48:39 -- setup/common.sh@18 -- # local node=0
00:06:05.274 00:48:39 -- setup/common.sh@19 -- # local var val
00:06:05.274 00:48:39 -- setup/common.sh@20 -- # local mem_f mem
00:06:05.274 00:48:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:05.274 00:48:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:05.274 00:48:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:05.274 00:48:39 -- setup/common.sh@28 -- # mapfile -t mem
00:06:05.274 00:48:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:05.274 00:48:39 -- setup/common.sh@31 -- # IFS=': '
00:06:05.274 00:48:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4806656 kB' 'MemUsed: 7436320 kB' 'SwapCached: 0 kB' 'Active: 1644964 kB' 'Inactive: 4306400 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 144348 kB' 'Active(file): 1643884 kB' 'Inactive(file): 4162052 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'FilePages: 5817596 kB' 'Mapped: 68960 kB' 'AnonPages: 163056 kB' 'Shmem: 2596 kB' 'KernelStack: 5036 kB' 'PageTables: 3572 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 240868 kB' 'Slab: 312748 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace elided: the node0 read/continue scan for HugePages_Surp begins here and skips MemTotal, MemFree, MemUsed, SwapCached, the Active/Inactive counters, Unevictable, Mlocked, Dirty, Writeback and FilePages]
00:06:05.274 00:48:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:05.274 00:48:39 -- setup/common.sh@32 -- # continue
00:06:05.274 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 
00:06:05.274 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.274 00:48:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.274 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 
00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # continue 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # IFS=': ' 00:06:05.275 00:48:39 -- setup/common.sh@31 -- # read -r var val _ 00:06:05.275 00:48:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.275 00:48:39 -- setup/common.sh@33 -- # echo 0 00:06:05.275 00:48:39 -- setup/common.sh@33 -- # return 0 00:06:05.275 00:48:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:05.275 00:48:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:05.275 00:48:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:05.275 00:48:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:05.275 00:48:39 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:05.275 node0=512 expecting 512 00:06:05.275 00:48:39 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:05.275 00:06:05.275 real 0m1.104s 00:06:05.275 user 0m0.316s 00:06:05.275 sys 0m0.760s 00:06:05.275 00:48:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.275 00:48:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.275 ************************************ 00:06:05.275 END TEST per_node_1G_alloc 00:06:05.275 ************************************ 00:06:05.275 00:48:39 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:05.275 00:48:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.275 00:48:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.275 00:48:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.275 ************************************ 00:06:05.275 START TEST even_2G_alloc 00:06:05.275 ************************************ 00:06:05.275 00:48:39 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:06:05.275 00:48:39 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:05.275 00:48:39 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:05.275 00:48:39 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:05.275 00:48:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:05.275 00:48:39 
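The block that just returned ("# echo 512 ... # return 0", setup/common.sh@17-@33) is the per-node variant of the same helper: when a node number is passed, get_meminfo switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0" prefix before scanning. A rough reconstruction from the trace, keeping the variable names it shows; treat it as a sketch rather than the literal function body:
get_meminfo() {                        # sketch of setup/common.sh get_meminfo, per the trace above
  local get=$1 node=$2 var val _
  local mem_f=/proc/meminfo
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo   # per-node file, e.g. node=0 above
  fi
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node 0 " prefix (the real script relies on extglob)
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue
    echo "$val" && return 0            # e.g. HugePages_Surp -> 0 in this run
  done < <(printf '%s\n' "${mem[@]}")
  return 1
}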
-- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:05.275 00:48:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:05.275 00:48:39 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:05.275 00:48:39 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:05.275 00:48:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:05.275 00:48:39 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:05.275 00:48:39 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:05.275 00:48:39 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:05.275 00:48:39 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:05.275 00:48:39 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:05.275 00:48:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:05.275 00:48:39 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:06:05.275 00:48:39 -- setup/hugepages.sh@83 -- # : 0 00:06:05.275 00:48:39 -- setup/hugepages.sh@84 -- # : 0 00:06:05.275 00:48:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:05.275 00:48:39 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:06:05.275 00:48:39 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:06:05.275 00:48:39 -- setup/hugepages.sh@153 -- # setup output 00:06:05.275 00:48:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:05.275 00:48:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:05.844 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:05.844 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:06.790 00:48:40 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:06:06.790 00:48:40 -- setup/hugepages.sh@89 -- # local node 00:06:06.790 00:48:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:06.790 00:48:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:06.790 00:48:40 -- setup/hugepages.sh@92 -- # local surp 00:06:06.790 00:48:40 -- setup/hugepages.sh@93 -- # local resv 00:06:06.790 00:48:40 -- setup/hugepages.sh@94 -- # local anon 00:06:06.790 00:48:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:06.790 00:48:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:06.790 00:48:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:06.790 00:48:40 -- setup/common.sh@18 -- # local node= 00:06:06.790 00:48:40 -- setup/common.sh@19 -- # local var val 00:06:06.790 00:48:40 -- setup/common.sh@20 -- # local mem_f mem 00:06:06.790 00:48:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.790 00:48:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.790 00:48:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.790 00:48:40 -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.790 00:48:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.790 00:48:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3755064 kB' 'MemAvailable: 9480564 kB' 'Buffers: 45376 kB' 'Cached: 5772220 kB' 'SwapCached: 0 kB' 'Active: 1644988 kB' 'Inactive: 4303472 kB' 'Active(anon): 1096 kB' 'Inactive(anon): 141428 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162044 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 159932 kB' 'Mapped: 68396 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312764 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71896 kB' 'KernelStack: 5024 kB' 
'PageTables: 3660 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.790 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.790 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- 
setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read 
-r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.791 00:48:40 -- setup/common.sh@33 -- # echo 0 00:06:06.791 00:48:40 -- setup/common.sh@33 -- # return 0 00:06:06.791 00:48:40 -- setup/hugepages.sh@97 -- # anon=0 00:06:06.791 00:48:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:06.791 00:48:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:06.791 00:48:40 -- setup/common.sh@18 -- # local node= 00:06:06.791 00:48:40 -- setup/common.sh@19 -- # local var val 00:06:06.791 00:48:40 -- setup/common.sh@20 -- # local mem_f mem 00:06:06.791 00:48:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.791 00:48:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.791 00:48:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.791 00:48:40 -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.791 00:48:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3755548 kB' 'MemAvailable: 9481048 kB' 'Buffers: 45376 kB' 'Cached: 5772220 kB' 'SwapCached: 0 kB' 'Active: 1644976 kB' 'Inactive: 4303164 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141120 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162044 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 159784 kB' 'Mapped: 68200 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312780 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71912 kB' 'KernelStack: 4896 kB' 'PageTables: 3276 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.791 00:48:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
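Before these verification scans started, the test re-created the pool with 1024 pages of 2048 kB spread evenly across nodes (the hugepages.sh@153 records and the scripts/setup.sh output above, between the two PCI device lines). A hedged usage sketch of that step; the environment variables and the setup.sh path are taken from the trace, the exact invocation performed inside "setup output" is assumed:
# Re-create the hugepage pool the way the even_2G_alloc test does;
# HUGE_EVEN_ALLOC spreads NRHUGE pages evenly over the online NUMA nodes.
NRHUGE=1024 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh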
00:06:06.791 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.791 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 
00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 
-- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.792 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.792 00:48:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.793 00:48:40 -- setup/common.sh@33 -- # echo 0 00:06:06.793 00:48:40 -- setup/common.sh@33 -- # return 0 00:06:06.793 00:48:40 -- setup/hugepages.sh@99 -- # surp=0 00:06:06.793 00:48:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:06.793 00:48:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:06.793 00:48:40 -- setup/common.sh@18 -- # local node= 00:06:06.793 00:48:40 -- setup/common.sh@19 -- # local var val 00:06:06.793 00:48:40 -- setup/common.sh@20 -- # local mem_f mem 00:06:06.793 00:48:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.793 00:48:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.793 00:48:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.793 00:48:40 -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.793 00:48:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.793 00:48:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3755500 kB' 'MemAvailable: 9481000 kB' 'Buffers: 45376 kB' 'Cached: 5772220 kB' 'SwapCached: 0 kB' 'Active: 1644976 kB' 'Inactive: 4303148 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141104 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162044 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 159748 kB' 'Mapped: 68196 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312780 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71912 kB' 'KernelStack: 4864 kB' 'PageTables: 3196 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 
00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:40 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:41 -- setup/common.sh@31 
-- # read -r var val _ 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.793 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.793 00:48:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 
-- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.794 00:48:41 -- setup/common.sh@33 -- # echo 0 00:06:06.794 00:48:41 -- setup/common.sh@33 -- # return 0 00:06:06.794 00:48:41 -- setup/hugepages.sh@100 -- # resv=0 00:06:06.794 00:48:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:06.794 nr_hugepages=1024 00:06:06.794 00:48:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:06.794 resv_hugepages=0 00:06:06.794 00:48:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:06.794 surplus_hugepages=0 00:06:06.794 00:48:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:06.794 anon_hugepages=0 00:06:06.794 00:48:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:06.794 00:48:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:06.794 00:48:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:06.794 00:48:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:06.794 00:48:41 -- setup/common.sh@18 -- # local node= 00:06:06.794 00:48:41 -- setup/common.sh@19 -- # local var val 00:06:06.794 00:48:41 -- setup/common.sh@20 -- # local mem_f mem 00:06:06.794 00:48:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.794 00:48:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.794 00:48:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.794 00:48:41 -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.794 00:48:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3755752 kB' 'MemAvailable: 9481252 kB' 'Buffers: 45376 kB' 'Cached: 5772220 kB' 'SwapCached: 0 kB' 'Active: 1644976 kB' 'Inactive: 4303096 kB' 'Active(anon): 1084 kB' 
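The "@107" and "@109" arithmetic records just above are the actual assertion of this test: the kernel's HugePages_Total must equal the requested nr_hugepages plus any surplus and reserved pages, all read back through get_meminfo. A compact sketch of that check using the values echoed above (nr_hugepages=1024, surplus=0, reserved=0, anon=0); function and variable names follow the trace, the standalone wiring is assumed:
# Sketch of the verify_nr_hugepages consistency check traced at hugepages.sh@97-@110 above.
nr_hugepages=1024                      # requested by the test
anon=$(get_meminfo AnonHugePages)      # 0 in this run
surp=$(get_meminfo HugePages_Surp)     # 0
resv=$(get_meminfo HugePages_Rsvd)     # 0
total=$(get_meminfo HugePages_Total)   # 1024
(( total == nr_hugepages + surp + resv ))   # hugepages.sh@107: the pool size adds up
(( total == nr_hugepages ))                 # hugepages.sh@109: nothing surplus or reserved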
'Inactive(anon): 141052 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162044 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 159696 kB' 'Mapped: 68196 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312780 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71912 kB' 'KernelStack: 4916 kB' 'PageTables: 3416 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.794 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.794 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 
-- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.795 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.795 00:48:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.795 00:48:41 -- setup/common.sh@33 -- # echo 1024 00:06:06.795 00:48:41 -- setup/common.sh@33 -- # return 0 00:06:06.795 00:48:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:06.795 00:48:41 -- setup/hugepages.sh@112 -- # get_nodes 00:06:06.795 00:48:41 -- setup/hugepages.sh@27 -- # local node 00:06:06.795 00:48:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:06.795 00:48:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:06.795 00:48:41 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:06.795 00:48:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:06.795 00:48:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:06.795 00:48:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:06.795 00:48:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:06.795 00:48:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:06.795 00:48:41 -- setup/common.sh@18 -- # local node=0 00:06:06.795 00:48:41 -- setup/common.sh@19 -- # local var val 00:06:06.795 00:48:41 -- setup/common.sh@20 -- # local mem_f mem 00:06:06.795 00:48:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.795 00:48:41 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:06.795 00:48:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:06.795 00:48:41 -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.796 00:48:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3755752 kB' 'MemUsed: 8487224 kB' 'SwapCached: 0 kB' 'Active: 1644976 kB' 'Inactive: 4303384 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141340 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162044 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'FilePages: 5817596 kB' 'Mapped: 68196 kB' 'AnonPages: 159972 kB' 'Shmem: 2596 kB' 'KernelStack: 4968 kB' 'PageTables: 3376 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 240868 kB' 'Slab: 312780 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 
00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # continue 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:06.796 00:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:06.796 00:48:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.796 00:48:41 -- setup/common.sh@33 -- # echo 0 00:06:06.796 00:48:41 -- setup/common.sh@33 -- # return 0 00:06:06.796 00:48:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:06.796 00:48:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:06.796 00:48:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:06.796 00:48:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:06.796 00:48:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:06.796 node0=1024 expecting 1024 00:06:06.796 00:48:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:06.796 00:06:06.796 real 0m1.513s 00:06:06.796 user 0m0.346s 00:06:06.796 sys 0m1.144s 00:06:06.797 00:48:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.797 00:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:06.797 ************************************ 00:06:06.797 END TEST even_2G_alloc 00:06:06.797 ************************************ 00:06:06.797 00:48:41 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:06:06.797 00:48:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.797 00:48:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.797 00:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:06.797 ************************************ 00:06:06.797 START TEST odd_alloc 00:06:06.797 ************************************ 00:06:06.797 00:48:41 -- common/autotest_common.sh@1114 -- # odd_alloc 00:06:06.797 00:48:41 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:06:06.797 00:48:41 -- setup/hugepages.sh@49 -- # local size=2098176 00:06:06.797 00:48:41 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:06.797 00:48:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:06.797 00:48:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:06:06.797 00:48:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:06.797 00:48:41 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:06.797 00:48:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:06.797 00:48:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:06:06.797 00:48:41 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:06.797 00:48:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:06.797 00:48:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:06.797 00:48:41 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:06.797 00:48:41 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:06.797 00:48:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:06.797 00:48:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:06:06.797 00:48:41 -- setup/hugepages.sh@83 -- # : 0 00:06:06.797 00:48:41 -- setup/hugepages.sh@84 -- # : 0 00:06:06.797 00:48:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:06.797 00:48:41 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:06:07.056 00:48:41 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:06:07.056 00:48:41 -- setup/hugepages.sh@160 -- # setup output 00:06:07.056 00:48:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:07.056 00:48:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 
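For readers following the trace: the loops above are setup/common.sh's get_meminfo walking /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node number is supplied) one field at a time until the requested counter is found, and setup/hugepages.sh then checks that HugePages_Total equals the configured page count plus surplus and reserved pages, which is why the run prints nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and node0=1024 expecting 1024. The sketch below is a simplified stand-in written under those assumptions; it is not the SPDK scripts themselves, and the helper names are hypothetical.

```bash
#!/usr/bin/env bash
# Hedged sketch of the meminfo lookup and the consistency check seen in this
# log. Field names and the node0 path come from the trace; everything else
# (function names, messages) is illustrative only.

# Print one counter from /proc/meminfo, or from the per-node meminfo file
# when a NUMA node number is given (as get_meminfo does with node=0 above).
get_meminfo_sketch() {
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        # Per-node files prefix every line with "Node <N> "; drop the prefix
        # so the field name is the first colon-separated token.
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

# The accounting rule the trace verifies: HugePages_Total must equal the
# requested count plus surplus and reserved pages (1024 == 1024 + 0 + 0 here).
verify_hugepages_sketch() {
    local nr_hugepages=$1
    local total surp resv
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    if (( total == nr_hugepages + surp + resv )); then
        echo "consistent: total=$total surp=$surp resv=$resv"
    else
        echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
        return 1
    fi
}

# Example use, mirroring the even_2G_alloc run above:
#   verify_hugepages_sketch 1024
#   get_meminfo_sketch HugePages_Surp 0   # per-node query, node 0
```

The odd_alloc test that starts next requests 2098176 kB of hugepage memory (HUGEMEM=2049); with the 2048 kB Hugepagesize reported in the trace that corresponds to the deliberately odd target of 1025 pages (nr_hugepages=1025), and the same verification is then repeated against that count.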
00:06:07.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:07.315 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:08.256 00:48:42 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:06:08.256 00:48:42 -- setup/hugepages.sh@89 -- # local node 00:06:08.256 00:48:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:08.256 00:48:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:08.256 00:48:42 -- setup/hugepages.sh@92 -- # local surp 00:06:08.256 00:48:42 -- setup/hugepages.sh@93 -- # local resv 00:06:08.256 00:48:42 -- setup/hugepages.sh@94 -- # local anon 00:06:08.256 00:48:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:08.256 00:48:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:08.256 00:48:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:08.256 00:48:42 -- setup/common.sh@18 -- # local node= 00:06:08.256 00:48:42 -- setup/common.sh@19 -- # local var val 00:06:08.256 00:48:42 -- setup/common.sh@20 -- # local mem_f mem 00:06:08.256 00:48:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:08.256 00:48:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:08.256 00:48:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:08.256 00:48:42 -- setup/common.sh@28 -- # mapfile -t mem 00:06:08.256 00:48:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:08.256 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.256 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3747004 kB' 'MemAvailable: 9472504 kB' 'Buffers: 45376 kB' 'Cached: 5772220 kB' 'SwapCached: 0 kB' 'Active: 1644984 kB' 'Inactive: 4303412 kB' 'Active(anon): 1092 kB' 'Inactive(anon): 141368 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162044 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 0 kB' 'Writeback: 4 kB' 'AnonPages: 159808 kB' 'Mapped: 68232 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312860 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 4912 kB' 'PageTables: 3344 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 503120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # 
IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- 
setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Bounce == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.257 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.257 00:48:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.257 00:48:42 -- setup/common.sh@33 -- # echo 0 00:06:08.257 00:48:42 -- setup/common.sh@33 -- # return 0 00:06:08.257 00:48:42 -- setup/hugepages.sh@97 -- # anon=0 00:06:08.257 00:48:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:08.257 00:48:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:08.257 00:48:42 -- setup/common.sh@18 -- # local node= 00:06:08.257 00:48:42 -- setup/common.sh@19 -- # local var val 00:06:08.258 00:48:42 -- setup/common.sh@20 -- # local mem_f mem 00:06:08.258 00:48:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:08.258 00:48:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:08.258 00:48:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:08.258 00:48:42 -- setup/common.sh@28 -- # mapfile -t mem 00:06:08.258 00:48:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3747256 kB' 'MemAvailable: 9472756 kB' 'Buffers: 45376 kB' 'Cached: 5772220 kB' 'SwapCached: 0 kB' 'Active: 1644976 kB' 'Inactive: 4303064 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141020 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162044 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 159688 kB' 'Mapped: 68196 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312860 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 4912 kB' 'PageTables: 3324 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 503120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 
00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.258 00:48:42 
-- setup/common.sh@31 -- # IFS=': ' 00:06:08.258 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.258 00:48:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.259 00:48:42 -- setup/common.sh@33 -- # echo 0 00:06:08.259 00:48:42 -- setup/common.sh@33 -- # return 0 00:06:08.259 00:48:42 -- setup/hugepages.sh@99 -- # surp=0 00:06:08.259 00:48:42 -- setup/hugepages.sh@100 -- 
# get_meminfo HugePages_Rsvd 00:06:08.259 00:48:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:08.259 00:48:42 -- setup/common.sh@18 -- # local node= 00:06:08.259 00:48:42 -- setup/common.sh@19 -- # local var val 00:06:08.259 00:48:42 -- setup/common.sh@20 -- # local mem_f mem 00:06:08.259 00:48:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:08.259 00:48:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:08.259 00:48:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:08.259 00:48:42 -- setup/common.sh@28 -- # mapfile -t mem 00:06:08.259 00:48:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3747256 kB' 'MemAvailable: 9472756 kB' 'Buffers: 45376 kB' 'Cached: 5772220 kB' 'SwapCached: 0 kB' 'Active: 1644976 kB' 'Inactive: 4303052 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141008 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162044 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 159680 kB' 'Mapped: 68196 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312860 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 4928 kB' 'PageTables: 3356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 503120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 
00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.259 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.259 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.260 00:48:42 -- setup/common.sh@33 -- # echo 0 00:06:08.260 00:48:42 -- setup/common.sh@33 -- # return 0 00:06:08.260 nr_hugepages=1025 00:06:08.260 resv_hugepages=0 00:06:08.260 surplus_hugepages=0 00:06:08.260 anon_hugepages=0 00:06:08.260 00:48:42 -- setup/hugepages.sh@100 -- # resv=0 00:06:08.260 00:48:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:06:08.260 00:48:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:08.260 00:48:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:08.260 00:48:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:08.260 00:48:42 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:08.260 00:48:42 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:06:08.260 00:48:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:08.260 00:48:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:08.260 00:48:42 -- setup/common.sh@18 -- # local node= 00:06:08.260 00:48:42 -- setup/common.sh@19 -- # local var val 00:06:08.260 00:48:42 -- setup/common.sh@20 -- # local mem_f mem 00:06:08.260 00:48:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:08.260 00:48:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:08.260 00:48:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:08.260 00:48:42 -- setup/common.sh@28 -- # mapfile -t mem 00:06:08.260 00:48:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3747004 kB' 'MemAvailable: 9472504 kB' 'Buffers: 45376 kB' 'Cached: 5772220 kB' 'SwapCached: 0 kB' 'Active: 1644976 kB' 'Inactive: 4303084 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141040 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162044 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 159760 kB' 'Mapped: 68196 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312860 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 4912 kB' 'PageTables: 3316 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 503120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 
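At this point hugepages.sh line 107 performs the core odd_alloc check: the HugePages_Total just parsed (1025) has to equal nr_hugepages plus the surplus and reserved counts gathered above, i.e. 1025 == 1025 + 0 + 0, and lines 109-110 then recheck the count against a fresh HugePages_Total read. A condensed, standalone equivalent of that accounting check (names follow the trace, the structure around them is assumed):

  nr_hugepages=1025                                            # what odd_alloc configured
  surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)  # 0 in this run
  resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)  # 0 in this run
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1025 in this run
  # the pool is only healthy if every configured page is accounted for
  (( total == nr_hugepages + surp + resv )) &&
      echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

The same totals are then expected on each NUMA node, which is what the later node0=1025 expecting 1025 line confirms for this single-node VM.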
00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.260 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.260 00:48:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 
-- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.261 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.261 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.262 00:48:42 -- setup/common.sh@33 -- # echo 1025 00:06:08.262 00:48:42 -- setup/common.sh@33 -- # return 0 00:06:08.262 00:48:42 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:08.262 00:48:42 -- setup/hugepages.sh@112 -- # get_nodes 00:06:08.262 00:48:42 -- setup/hugepages.sh@27 -- # local node 00:06:08.262 00:48:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:08.262 00:48:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:06:08.262 00:48:42 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:08.262 00:48:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:08.262 00:48:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:08.262 00:48:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:08.262 00:48:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:08.262 00:48:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:08.262 00:48:42 -- setup/common.sh@18 -- # local node=0 00:06:08.262 00:48:42 -- setup/common.sh@19 -- # local var val 00:06:08.262 00:48:42 -- setup/common.sh@20 -- # local mem_f mem 00:06:08.262 00:48:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:08.262 00:48:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:08.262 00:48:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:08.262 00:48:42 -- setup/common.sh@28 -- # mapfile -t mem 00:06:08.262 00:48:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3747004 kB' 'MemUsed: 8495972 kB' 'SwapCached: 0 kB' 'Active: 1644976 kB' 'Inactive: 4303288 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141244 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162044 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5817596 kB' 'Mapped: 68196 kB' 'AnonPages: 159940 kB' 'Shmem: 2596 kB' 'KernelStack: 4948 kB' 'PageTables: 3236 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 240868 kB' 'Slab: 312860 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
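Here the same meminfo parser is pointed at node 0: because get_meminfo was called with a node argument, mem_f becomes /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the parser strips before scanning. The per-node numbers also add up, MemFree 3747004 kB plus MemUsed 8495972 kB gives the node's 12242976 kB MemTotal. A standalone way to pull one key out of a node's meminfo (illustrative only, not the script's own code):

  node=0 key=HugePages_Surp
  node_f=/sys/devices/system/node/node$node/meminfo
  # per-node lines look like: "Node 0 HugePages_Surp:     0"
  [[ -e $node_f ]] && awk -v k="$key:" '$3 == k { print $4 }' "$node_f"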
00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 
00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.262 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.262 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.263 00:48:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.263 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.263 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.263 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.263 00:48:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.263 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.263 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.263 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.263 00:48:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.263 00:48:42 -- setup/common.sh@32 -- # continue 00:06:08.263 00:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:08.263 00:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:08.263 00:48:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.263 00:48:42 -- setup/common.sh@33 -- # echo 0 00:06:08.263 00:48:42 -- setup/common.sh@33 -- # return 0 00:06:08.522 00:48:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:08.522 00:48:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:08.522 00:48:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:08.522 00:48:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:08.522 00:48:42 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:06:08.522 node0=1025 expecting 1025 00:06:08.522 00:48:42 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:06:08.522 00:06:08.522 real 0m1.480s 00:06:08.522 user 0m0.316s 00:06:08.522 sys 0m1.167s 00:06:08.522 00:48:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.522 00:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:08.522 ************************************ 00:06:08.522 END TEST odd_alloc 00:06:08.522 ************************************ 00:06:08.522 00:48:42 -- setup/hugepages.sh@214 -- # run_test 
custom_alloc custom_alloc 00:06:08.522 00:48:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.522 00:48:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.522 00:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:08.522 ************************************ 00:06:08.522 START TEST custom_alloc 00:06:08.522 ************************************ 00:06:08.522 00:48:42 -- common/autotest_common.sh@1114 -- # custom_alloc 00:06:08.522 00:48:42 -- setup/hugepages.sh@167 -- # local IFS=, 00:06:08.522 00:48:42 -- setup/hugepages.sh@169 -- # local node 00:06:08.522 00:48:42 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:06:08.522 00:48:42 -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:08.522 00:48:42 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:08.522 00:48:42 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:08.522 00:48:42 -- setup/hugepages.sh@49 -- # local size=1048576 00:06:08.522 00:48:42 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:08.522 00:48:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:08.522 00:48:42 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:08.522 00:48:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:08.522 00:48:42 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:08.522 00:48:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:08.522 00:48:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:08.522 00:48:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:08.522 00:48:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:08.522 00:48:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:08.522 00:48:42 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:08.522 00:48:42 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:08.522 00:48:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:08.522 00:48:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:08.522 00:48:42 -- setup/hugepages.sh@83 -- # : 0 00:06:08.522 00:48:42 -- setup/hugepages.sh@84 -- # : 0 00:06:08.522 00:48:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:08.522 00:48:42 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:08.522 00:48:42 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:06:08.522 00:48:42 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:08.522 00:48:42 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:08.522 00:48:42 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:08.522 00:48:42 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:08.522 00:48:42 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:08.522 00:48:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:08.522 00:48:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:08.522 00:48:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:08.522 00:48:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:08.522 00:48:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:08.522 00:48:42 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:08.522 00:48:42 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:06:08.522 00:48:42 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:08.522 00:48:42 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:08.522 00:48:42 -- setup/hugepages.sh@78 -- # return 0 00:06:08.522 00:48:42 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:06:08.522 00:48:42 -- setup/hugepages.sh@187 -- # setup 
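The custom_alloc test starts by asking get_test_nr_hugepages for 1048576 kB of huge page memory; with the 2048 kB default huge page size seen in the meminfo dumps, that is 1048576 / 2048 = 512 pages, all of which land on the only node, ending in HUGENODE='nodes_hp[0]=512' for the setup invocation that follows. A sketch of that size-to-pages conversion (reconstructed from the trace; the real helper in setup/hugepages.sh also handles explicit per-node requests, which this omits):

  get_test_nr_hugepages() {             # e.g. get_test_nr_hugepages 1048576
      local size_kb=$1 default_kb
      default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
      (( size_kb >= default_kb )) || return 1
      nr_hugepages=$(( size_kb / default_kb ))                        # 1048576 / 2048 = 512
      echo "nr_hugepages=$nr_hugepages"
  }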
output 00:06:08.522 00:48:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:08.522 00:48:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:08.781 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:08.781 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:09.352 00:48:43 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:06:09.352 00:48:43 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:06:09.352 00:48:43 -- setup/hugepages.sh@89 -- # local node 00:06:09.352 00:48:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:09.352 00:48:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:09.352 00:48:43 -- setup/hugepages.sh@92 -- # local surp 00:06:09.352 00:48:43 -- setup/hugepages.sh@93 -- # local resv 00:06:09.352 00:48:43 -- setup/hugepages.sh@94 -- # local anon 00:06:09.352 00:48:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:09.352 00:48:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:09.352 00:48:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:09.352 00:48:43 -- setup/common.sh@18 -- # local node= 00:06:09.352 00:48:43 -- setup/common.sh@19 -- # local var val 00:06:09.352 00:48:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:09.352 00:48:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:09.352 00:48:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:09.352 00:48:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:09.352 00:48:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:09.352 00:48:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4797848 kB' 'MemAvailable: 10523348 kB' 'Buffers: 45376 kB' 'Cached: 5772224 kB' 'SwapCached: 0 kB' 'Active: 1644984 kB' 'Inactive: 4303276 kB' 'Active(anon): 1092 kB' 'Inactive(anon): 141232 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162044 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 159876 kB' 'Mapped: 68220 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312820 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71952 kB' 'KernelStack: 4912 kB' 'PageTables: 3332 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 503120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 
00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.352 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.352 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 
00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.353 00:48:43 -- setup/common.sh@33 -- # echo 0 00:06:09.353 00:48:43 -- setup/common.sh@33 -- # return 0 00:06:09.353 00:48:43 -- setup/hugepages.sh@97 -- # anon=0 00:06:09.353 00:48:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:09.353 00:48:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:09.353 00:48:43 -- setup/common.sh@18 -- # local node= 00:06:09.353 00:48:43 -- setup/common.sh@19 -- # local var val 00:06:09.353 00:48:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:09.353 00:48:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:09.353 00:48:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:09.353 00:48:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:09.353 00:48:43 -- 
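Before re-counting for custom_alloc, verify_nr_hugepages first looks at the transparent hugepage policy: the "always [madvise] never" string compared at hugepages.sh line 96 is the kernel's THP mode, and since it is not locked to [never] the script goes on to read AnonHugePages, which is 0 kB here, so anon=0. A rough equivalent of that check (the sysfs path is an assumption; the trace only shows the policy string itself):

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      # THP could be in play, so track anonymous huge pages separately
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "anon_hugepages=$anon"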
setup/common.sh@28 -- # mapfile -t mem 00:06:09.353 00:48:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4797848 kB' 'MemAvailable: 10523352 kB' 'Buffers: 45384 kB' 'Cached: 5772224 kB' 'SwapCached: 0 kB' 'Active: 1644976 kB' 'Inactive: 4303212 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141164 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162048 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'AnonPages: 159832 kB' 'Mapped: 68200 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312820 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71952 kB' 'KernelStack: 4912 kB' 'PageTables: 3324 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 503120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.353 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.353 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # 
continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 
-- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.354 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.354 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:09.355 00:48:43 -- setup/common.sh@33 -- # echo 0 00:06:09.355 00:48:43 -- setup/common.sh@33 -- # return 0 00:06:09.355 00:48:43 -- setup/hugepages.sh@99 -- # surp=0 00:06:09.355 00:48:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:09.355 00:48:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:09.355 00:48:43 -- setup/common.sh@18 -- # local node= 00:06:09.355 00:48:43 -- setup/common.sh@19 -- # local var val 00:06:09.355 00:48:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:09.355 00:48:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:09.355 00:48:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:09.355 00:48:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:09.355 00:48:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:09.355 00:48:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4797848 kB' 'MemAvailable: 10523352 kB' 'Buffers: 45384 kB' 'Cached: 5772224 kB' 'SwapCached: 0 kB' 'Active: 1644976 kB' 'Inactive: 4303172 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141124 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162048 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'AnonPages: 159792 kB' 'Mapped: 68200 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312820 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71952 kB' 'KernelStack: 4896 kB' 'PageTables: 3284 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 503120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 
00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 
-- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.355 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.355 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.356 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.356 00:48:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.617 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.617 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.617 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.617 00:48:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.617 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.617 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.617 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.617 00:48:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.617 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.617 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.617 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.617 00:48:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.617 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.617 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.617 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.617 00:48:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.617 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.617 
00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.617 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.617 00:48:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.617 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.617 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.617 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.617 00:48:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.617 00:48:43 -- setup/common.sh@33 -- # echo 0 00:06:09.617 00:48:43 -- setup/common.sh@33 -- # return 0 00:06:09.617 00:48:43 -- setup/hugepages.sh@100 -- # resv=0 00:06:09.617 nr_hugepages=512 00:06:09.617 00:48:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:06:09.617 resv_hugepages=0 00:06:09.617 00:48:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:09.617 surplus_hugepages=0 00:06:09.617 00:48:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:09.617 anon_hugepages=0 00:06:09.617 00:48:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:09.617 00:48:43 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:09.617 00:48:43 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:06:09.617 00:48:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:09.617 00:48:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:09.617 00:48:43 -- setup/common.sh@18 -- # local node= 00:06:09.617 00:48:43 -- setup/common.sh@19 -- # local var val 00:06:09.617 00:48:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:09.617 00:48:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:09.617 00:48:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:09.617 00:48:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:09.617 00:48:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:09.618 00:48:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4797848 kB' 'MemAvailable: 10523352 kB' 'Buffers: 45384 kB' 'Cached: 5772224 kB' 'SwapCached: 0 kB' 'Active: 1644976 kB' 'Inactive: 4303628 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141580 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162048 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'AnonPages: 160256 kB' 'Mapped: 68200 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312820 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71952 kB' 'KernelStack: 4944 kB' 'PageTables: 3420 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 504544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 
00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var 
val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.618 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.618 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.619 00:48:43 -- 
setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.619 00:48:43 -- setup/common.sh@33 -- # echo 512 00:06:09.619 00:48:43 -- setup/common.sh@33 -- # return 0 00:06:09.619 00:48:43 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:09.619 00:48:43 -- setup/hugepages.sh@112 -- # get_nodes 00:06:09.619 00:48:43 -- setup/hugepages.sh@27 -- # local node 00:06:09.619 00:48:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:09.619 00:48:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:09.619 00:48:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:09.619 00:48:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:09.619 00:48:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:09.619 00:48:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:09.619 00:48:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:09.619 00:48:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:09.619 00:48:43 -- setup/common.sh@18 -- # local node=0 00:06:09.619 00:48:43 -- setup/common.sh@19 -- # local var val 00:06:09.619 00:48:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:09.619 00:48:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:09.619 00:48:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:09.619 00:48:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:09.619 00:48:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:09.619 00:48:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4797848 kB' 'MemUsed: 7445128 kB' 'SwapCached: 0 kB' 'Active: 1644976 kB' 'Inactive: 4303268 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141220 kB' 'Active(file): 1643892 kB' 'Inactive(file): 4162048 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'FilePages: 5817608 kB' 'Mapped: 68200 kB' 'AnonPages: 159592 kB' 'Shmem: 2596 kB' 'KernelStack: 5012 kB' 'PageTables: 3420 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 240868 kB' 'Slab: 312820 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.619 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.619 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # 
read -r var val _ 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # continue 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:09.620 00:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:09.620 00:48:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.620 00:48:43 -- setup/common.sh@33 -- # echo 0 00:06:09.620 00:48:43 -- setup/common.sh@33 -- # return 0 00:06:09.620 00:48:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:09.620 00:48:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:09.620 00:48:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:09.620 00:48:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:09.620 node0=512 expecting 512 00:06:09.620 00:48:43 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:09.620 00:48:43 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:09.620 00:06:09.620 real 0m1.080s 00:06:09.620 user 0m0.303s 00:06:09.620 sys 0m0.789s 00:06:09.620 00:48:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.620 00:48:43 -- common/autotest_common.sh@10 -- # set +x 
00:06:09.620 ************************************ 00:06:09.620 END TEST custom_alloc 00:06:09.620 ************************************ 00:06:09.620 00:48:43 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:09.620 00:48:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.620 00:48:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.620 00:48:43 -- common/autotest_common.sh@10 -- # set +x 00:06:09.620 ************************************ 00:06:09.620 START TEST no_shrink_alloc 00:06:09.620 ************************************ 00:06:09.620 00:48:43 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:06:09.620 00:48:43 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:09.620 00:48:43 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:09.620 00:48:43 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:09.620 00:48:43 -- setup/hugepages.sh@51 -- # shift 00:06:09.620 00:48:43 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:09.620 00:48:43 -- setup/hugepages.sh@52 -- # local node_ids 00:06:09.620 00:48:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:09.620 00:48:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:09.620 00:48:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:09.620 00:48:43 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:09.620 00:48:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:09.620 00:48:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:09.620 00:48:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:09.620 00:48:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:09.620 00:48:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:09.620 00:48:43 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:09.620 00:48:43 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:09.620 00:48:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:09.620 00:48:43 -- setup/hugepages.sh@73 -- # return 0 00:06:09.620 00:48:43 -- setup/hugepages.sh@198 -- # setup output 00:06:09.620 00:48:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:09.620 00:48:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:09.902 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:09.902 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:10.907 00:48:45 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:06:10.907 00:48:45 -- setup/hugepages.sh@89 -- # local node 00:06:10.907 00:48:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:10.907 00:48:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:10.907 00:48:45 -- setup/hugepages.sh@92 -- # local surp 00:06:10.907 00:48:45 -- setup/hugepages.sh@93 -- # local resv 00:06:10.907 00:48:45 -- setup/hugepages.sh@94 -- # local anon 00:06:10.907 00:48:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:10.907 00:48:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:10.907 00:48:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:10.907 00:48:45 -- setup/common.sh@18 -- # local node= 00:06:10.907 00:48:45 -- setup/common.sh@19 -- # local var val 00:06:10.907 00:48:45 -- setup/common.sh@20 -- # local mem_f mem 00:06:10.907 00:48:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.907 00:48:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.907 00:48:45 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.907 00:48:45 -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.907 00:48:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3749400 kB' 'MemAvailable: 9474912 kB' 'Buffers: 45384 kB' 'Cached: 5772224 kB' 'SwapCached: 0 kB' 'Active: 1645032 kB' 'Inactive: 4303416 kB' 'Active(anon): 1100 kB' 'Inactive(anon): 141400 kB' 'Active(file): 1643932 kB' 'Inactive(file): 4162016 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'AnonPages: 159996 kB' 'Mapped: 68232 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312668 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71800 kB' 'KernelStack: 4944 kB' 'PageTables: 3400 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.907 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.907 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.907 00:48:45 -- 
setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 
00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.908 00:48:45 -- setup/common.sh@33 -- # echo 0 00:06:10.908 00:48:45 -- setup/common.sh@33 -- # return 0 00:06:10.908 00:48:45 -- setup/hugepages.sh@97 -- # anon=0 00:06:10.908 00:48:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:10.908 00:48:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:10.908 00:48:45 -- setup/common.sh@18 -- # local node= 00:06:10.908 00:48:45 -- setup/common.sh@19 -- # local var val 00:06:10.908 00:48:45 -- setup/common.sh@20 -- # local mem_f mem 00:06:10.908 00:48:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.908 00:48:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.908 00:48:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.908 00:48:45 -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.908 00:48:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3749400 kB' 'MemAvailable: 9474912 kB' 'Buffers: 45384 kB' 'Cached: 5772224 kB' 'SwapCached: 0 kB' 'Active: 1645032 kB' 'Inactive: 4303120 kB' 'Active(anon): 1100 kB' 'Inactive(anon): 141104 kB' 'Active(file): 1643932 kB' 'Inactive(file): 4162016 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'AnonPages: 159700 kB' 'Mapped: 68232 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312668 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71800 kB' 'KernelStack: 4928 kB' 'PageTables: 3360 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.908 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.908 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 
-- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 
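[editor's note] The verify step traced here reads AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total in turn and then checks that the total matches the requested page count plus surplus and reserved pages. A condensed sketch of that accounting check follows; it reuses the get_meminfo_sketch helper sketched earlier, and the variable names are illustrative rather than the script's own.

    nr_hugepages=1024
    anon=$(get_meminfo_sketch AnonHugePages)      # 0 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent: total=$total surp=$surp resv=$resv"
    fi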
00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 
-- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.909 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.909 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.909 00:48:45 -- setup/common.sh@33 -- # echo 0 00:06:10.909 00:48:45 -- setup/common.sh@33 -- # return 0 00:06:10.909 00:48:45 -- setup/hugepages.sh@99 -- # surp=0 00:06:10.909 00:48:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:10.909 00:48:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:10.909 00:48:45 -- setup/common.sh@18 -- # local node= 00:06:10.909 00:48:45 -- setup/common.sh@19 -- # local var val 00:06:10.909 00:48:45 -- setup/common.sh@20 -- # local mem_f mem 00:06:10.909 00:48:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.909 00:48:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.909 00:48:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.909 00:48:45 -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.909 00:48:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.910 00:48:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3749652 kB' 'MemAvailable: 9475164 kB' 'Buffers: 45384 kB' 'Cached: 5772224 kB' 'SwapCached: 0 kB' 'Active: 1645024 kB' 'Inactive: 4303240 kB' 'Active(anon): 1092 kB' 'Inactive(anon): 141224 kB' 'Active(file): 1643932 kB' 'Inactive(file): 4162016 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'AnonPages: 159784 kB' 'Mapped: 68196 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312668 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71800 kB' 
'KernelStack: 4928 kB' 'PageTables: 3344 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 
00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.910 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.910 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.911 00:48:45 
-- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.911 00:48:45 -- setup/common.sh@33 -- # echo 0 00:06:10.911 00:48:45 -- setup/common.sh@33 -- # return 0 00:06:10.911 00:48:45 -- setup/hugepages.sh@100 -- # resv=0 00:06:10.911 nr_hugepages=1024 00:06:10.911 00:48:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:10.911 resv_hugepages=0 00:06:10.911 00:48:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:10.911 surplus_hugepages=0 00:06:10.911 00:48:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:10.911 anon_hugepages=0 00:06:10.911 00:48:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:10.911 00:48:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:10.911 00:48:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:10.911 00:48:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:10.911 00:48:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:10.911 00:48:45 -- setup/common.sh@18 -- # local node= 00:06:10.911 
00:48:45 -- setup/common.sh@19 -- # local var val 00:06:10.911 00:48:45 -- setup/common.sh@20 -- # local mem_f mem 00:06:10.911 00:48:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.911 00:48:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.911 00:48:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.911 00:48:45 -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.911 00:48:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3749652 kB' 'MemAvailable: 9475164 kB' 'Buffers: 45384 kB' 'Cached: 5772224 kB' 'SwapCached: 0 kB' 'Active: 1645024 kB' 'Inactive: 4303280 kB' 'Active(anon): 1092 kB' 'Inactive(anon): 141264 kB' 'Active(file): 1643932 kB' 'Inactive(file): 4162016 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'AnonPages: 159828 kB' 'Mapped: 68196 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312668 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71800 kB' 'KernelStack: 4948 kB' 'PageTables: 3228 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 
-- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.911 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.911 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.912 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.912 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.913 00:48:45 -- setup/common.sh@33 -- # echo 1024 00:06:10.913 00:48:45 -- setup/common.sh@33 -- # return 0 00:06:10.913 00:48:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:10.913 00:48:45 -- setup/hugepages.sh@112 -- # get_nodes 00:06:10.913 00:48:45 -- setup/hugepages.sh@27 -- # local node 00:06:10.913 00:48:45 -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:10.913 00:48:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:10.913 00:48:45 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:10.913 00:48:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:10.913 00:48:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:10.913 00:48:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:10.913 00:48:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:10.913 00:48:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:10.913 00:48:45 -- setup/common.sh@18 -- # local node=0 00:06:10.913 00:48:45 -- setup/common.sh@19 -- # local var val 00:06:10.913 00:48:45 -- setup/common.sh@20 -- # local mem_f mem 00:06:10.913 00:48:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.913 00:48:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:10.913 00:48:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:10.913 00:48:45 -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.913 00:48:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3751468 kB' 'MemUsed: 8491508 kB' 'SwapCached: 0 kB' 'Active: 1645016 kB' 'Inactive: 4303392 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141376 kB' 'Active(file): 1643932 kB' 'Inactive(file): 4162016 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'FilePages: 5817608 kB' 'Mapped: 68196 kB' 'AnonPages: 159968 kB' 'Shmem: 2596 kB' 'KernelStack: 4944 kB' 'PageTables: 3396 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 240868 kB' 'Slab: 312668 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # 
read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 
00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.913 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.913 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.914 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.914 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.914 00:48:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.914 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.914 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.914 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.914 00:48:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.914 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.914 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.914 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.914 00:48:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.914 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.914 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.914 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.914 00:48:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.914 00:48:45 -- setup/common.sh@32 -- # continue 00:06:10.914 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.914 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.173 00:48:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.173 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.174 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.174 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.174 00:48:45 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.174 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.174 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.174 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.174 00:48:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.174 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.174 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.174 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.174 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.174 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.174 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.174 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.174 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.174 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.174 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.174 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.174 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.174 00:48:45 -- setup/common.sh@33 -- # echo 0 00:06:11.174 00:48:45 -- setup/common.sh@33 -- # return 0 00:06:11.174 00:48:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:11.174 00:48:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:11.174 00:48:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:11.174 00:48:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:11.174 node0=1024 expecting 1024 00:06:11.174 00:48:45 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:11.174 00:48:45 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:11.174 00:48:45 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:06:11.174 00:48:45 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:06:11.174 00:48:45 -- setup/hugepages.sh@202 -- # setup output 00:06:11.174 00:48:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:11.174 00:48:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:11.435 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:11.435 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:11.435 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:06:11.435 00:48:45 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:06:11.435 00:48:45 -- setup/hugepages.sh@89 -- # local node 00:06:11.435 00:48:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:11.435 00:48:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:11.435 00:48:45 -- setup/hugepages.sh@92 -- # local surp 00:06:11.435 00:48:45 -- setup/hugepages.sh@93 -- # local resv 00:06:11.435 00:48:45 -- setup/hugepages.sh@94 -- # local anon 00:06:11.435 00:48:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:11.435 00:48:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:11.435 00:48:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:11.435 00:48:45 -- setup/common.sh@18 -- # local node= 00:06:11.435 00:48:45 -- setup/common.sh@19 -- # local var val 00:06:11.435 00:48:45 -- setup/common.sh@20 -- # local mem_f mem 00:06:11.435 00:48:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:11.435 00:48:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:11.435 00:48:45 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:06:11.435 00:48:45 -- setup/common.sh@28 -- # mapfile -t mem 00:06:11.435 00:48:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3746704 kB' 'MemAvailable: 9472216 kB' 'Buffers: 45384 kB' 'Cached: 5772224 kB' 'SwapCached: 0 kB' 'Active: 1645024 kB' 'Inactive: 4304120 kB' 'Active(anon): 1092 kB' 'Inactive(anon): 142104 kB' 'Active(file): 1643932 kB' 'Inactive(file): 4162016 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'AnonPages: 160256 kB' 'Mapped: 68208 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312668 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71800 kB' 'KernelStack: 5056 kB' 'PageTables: 3812 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20572 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
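[editor's note] The trace around this point shows the get_meminfo helper at work: it picks /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node number is supplied), strips the "Node N" prefix from per-node lines, then walks every "key: value" pair, hitting "continue" until the key matches the requested field, whose value it echoes. A minimal sketch of that pattern, assuming only standard bash and the kernel meminfo files; this is an illustration of what the trace is doing, not the repository's actual setup/common.sh code:

get_meminfo_sketch() {
    # Sketch only: mirrors the key-scan loop seen in the trace above.
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}           # per-node files prefix every line with "Node N"
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                      # page counts for HugePages_*, kB for most other keys
            return 0
        fi
    done < "$mem_f"
    return 1
}
# e.g. surp=$(get_meminfo_sketch HugePages_Surp 0)   # per-node surplus pages; 0 in this run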
00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.435 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.435 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- 
setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 
00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.436 00:48:45 -- setup/common.sh@33 -- # echo 0 00:06:11.436 00:48:45 -- setup/common.sh@33 -- # return 0 00:06:11.436 00:48:45 -- setup/hugepages.sh@97 -- # anon=0 00:06:11.436 00:48:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:11.436 00:48:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:11.436 00:48:45 -- setup/common.sh@18 -- # local node= 00:06:11.436 00:48:45 -- setup/common.sh@19 -- # local var val 00:06:11.436 00:48:45 -- setup/common.sh@20 -- # local mem_f mem 00:06:11.436 00:48:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:11.436 00:48:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:11.436 00:48:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:11.436 00:48:45 -- setup/common.sh@28 -- # mapfile -t mem 00:06:11.436 00:48:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:11.436 00:48:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3746956 kB' 'MemAvailable: 9472468 kB' 'Buffers: 45384 kB' 'Cached: 5772224 kB' 'SwapCached: 0 kB' 'Active: 1645024 kB' 'Inactive: 4303640 kB' 'Active(anon): 1092 kB' 'Inactive(anon): 141624 kB' 'Active(file): 1643932 kB' 'Inactive(file): 4162016 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'AnonPages: 159976 kB' 'Mapped: 68208 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312676 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71808 kB' 'KernelStack: 5008 kB' 'PageTables: 3684 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20572 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.436 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.436 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 
-- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 
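[editor's note] The point of these repeated scans in verify_nr_hugepages is a simple accounting check: gather AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total, print them, and assert that HugePages_Total equals nr_hugepages plus surplus plus reserved (1024 == 1024 + 0 + 0 in this run) before crediting the reserved pages back to each node. A compact, hedged equivalent of that arithmetic using awk; an illustration only, not the script's actual code:

nr_hugepages=1024                                            # value the test expects
anon=$(awk  '/^AnonHugePages:/  {print $2}' /proc/meminfo)   # kB of transparent hugepages
surp=$(awk  '/^HugePages_Surp:/ {print $2}' /proc/meminfo)   # surplus huge pages
resv=$(awk  '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)   # reserved but not yet faulted in
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
(( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total: $total"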
00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 
-- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.437 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.437 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.438 00:48:45 -- setup/common.sh@33 -- # echo 0 00:06:11.438 00:48:45 -- setup/common.sh@33 -- # return 0 00:06:11.438 00:48:45 -- setup/hugepages.sh@99 -- # surp=0 00:06:11.438 00:48:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:11.438 00:48:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:11.438 00:48:45 -- setup/common.sh@18 -- # local node= 00:06:11.438 00:48:45 -- setup/common.sh@19 -- # local var val 00:06:11.438 00:48:45 -- setup/common.sh@20 -- # local mem_f mem 00:06:11.438 00:48:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:11.438 00:48:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:11.438 00:48:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:11.438 00:48:45 -- setup/common.sh@28 -- # mapfile -t mem 00:06:11.438 00:48:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3747208 kB' 'MemAvailable: 9472720 kB' 'Buffers: 45384 kB' 'Cached: 5772224 kB' 'SwapCached: 0 kB' 'Active: 1645016 kB' 'Inactive: 4303116 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141100 kB' 'Active(file): 1643932 kB' 'Inactive(file): 4162016 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'AnonPages: 159720 kB' 
'Mapped: 68196 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312828 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71960 kB' 'KernelStack: 4956 kB' 'PageTables: 3624 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20572 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 
00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.438 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.438 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 
-- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.439 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.439 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.439 00:48:45 -- setup/common.sh@33 -- # echo 0 00:06:11.439 00:48:45 -- setup/common.sh@33 -- # return 0 00:06:11.439 00:48:45 -- setup/hugepages.sh@100 -- # resv=0 00:06:11.439 nr_hugepages=1024 00:06:11.439 00:48:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:11.439 resv_hugepages=0 00:06:11.439 00:48:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:11.439 surplus_hugepages=0 00:06:11.439 00:48:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:11.439 anon_hugepages=0 00:06:11.439 00:48:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:11.439 00:48:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:11.439 00:48:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:11.701 00:48:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:11.701 00:48:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:11.701 00:48:45 -- setup/common.sh@18 -- # local node= 00:06:11.701 
00:48:45 -- setup/common.sh@19 -- # local var val 00:06:11.701 00:48:45 -- setup/common.sh@20 -- # local mem_f mem 00:06:11.701 00:48:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:11.701 00:48:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:11.701 00:48:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:11.701 00:48:45 -- setup/common.sh@28 -- # mapfile -t mem 00:06:11.701 00:48:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3746968 kB' 'MemAvailable: 9472480 kB' 'Buffers: 45384 kB' 'Cached: 5772224 kB' 'SwapCached: 0 kB' 'Active: 1645016 kB' 'Inactive: 4303296 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141280 kB' 'Active(file): 1643932 kB' 'Inactive(file): 4162016 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'AnonPages: 160184 kB' 'Mapped: 68716 kB' 'Shmem: 2596 kB' 'KReclaimable: 240868 kB' 'Slab: 312828 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71960 kB' 'KernelStack: 5040 kB' 'PageTables: 3680 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 505780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 
-- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.701 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.701 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:11.702 00:48:45 -- setup/common.sh@33 -- # echo 1024 00:06:11.702 00:48:45 -- setup/common.sh@33 -- # return 0 00:06:11.702 00:48:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:11.702 00:48:45 -- setup/hugepages.sh@112 -- # get_nodes 00:06:11.702 00:48:45 -- setup/hugepages.sh@27 -- # local node 00:06:11.702 00:48:45 -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:11.702 00:48:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:11.702 00:48:45 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:11.702 00:48:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:11.702 00:48:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:11.702 00:48:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:11.702 00:48:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:11.702 00:48:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:11.702 00:48:45 -- setup/common.sh@18 -- # local node=0 00:06:11.702 00:48:45 -- setup/common.sh@19 -- # local var val 00:06:11.702 00:48:45 -- setup/common.sh@20 -- # local mem_f mem 00:06:11.702 00:48:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:11.702 00:48:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:11.702 00:48:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:11.702 00:48:45 -- setup/common.sh@28 -- # mapfile -t mem 00:06:11.702 00:48:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3746968 kB' 'MemUsed: 8496008 kB' 'SwapCached: 0 kB' 'Active: 1645016 kB' 'Inactive: 4303556 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 141540 kB' 'Active(file): 1643932 kB' 'Inactive(file): 4162016 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'FilePages: 5817608 kB' 'Mapped: 68456 kB' 'AnonPages: 160184 kB' 'Shmem: 2596 kB' 'KernelStack: 5040 kB' 'PageTables: 3420 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 240868 kB' 'Slab: 312828 kB' 'SReclaimable: 240868 kB' 'SUnreclaim: 71960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # 
read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.702 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.702 00:48:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 
00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # continue 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.703 00:48:45 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.703 00:48:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.703 00:48:45 -- setup/common.sh@33 -- # echo 0 00:06:11.703 00:48:45 -- setup/common.sh@33 -- # return 0 00:06:11.703 00:48:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:11.703 00:48:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:11.703 00:48:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:11.703 00:48:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:11.703 node0=1024 expecting 1024 00:06:11.703 00:48:45 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:11.703 00:48:45 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:11.703 00:06:11.703 real 0m2.008s 00:06:11.703 user 0m0.637s 00:06:11.703 sys 0m1.476s 00:06:11.703 00:48:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.703 00:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:11.703 ************************************ 00:06:11.703 END TEST no_shrink_alloc 00:06:11.703 ************************************ 00:06:11.703 00:48:45 -- setup/hugepages.sh@217 -- # clear_hp 00:06:11.703 00:48:45 -- setup/hugepages.sh@37 -- # local node hp 00:06:11.703 00:48:45 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:11.703 00:48:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:11.703 00:48:45 -- setup/hugepages.sh@41 -- # echo 0 00:06:11.703 00:48:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:11.703 00:48:45 -- setup/hugepages.sh@41 -- # echo 0 00:06:11.703 00:48:45 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:11.703 00:48:45 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:11.703 00:06:11.703 real 0m9.574s 00:06:11.703 user 0m2.565s 00:06:11.703 sys 0m7.048s 00:06:11.703 00:48:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.703 00:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:11.703 ************************************ 00:06:11.703 END TEST hugepages 00:06:11.703 ************************************ 00:06:11.703 00:48:45 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:11.703 00:48:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.703 00:48:45 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:06:11.703 00:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:11.703 ************************************ 00:06:11.703 START TEST driver 00:06:11.703 ************************************ 00:06:11.703 00:48:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:11.963 * Looking for test storage... 00:06:11.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:11.963 00:48:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:11.963 00:48:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:11.963 00:48:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:11.963 00:48:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:11.963 00:48:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:11.963 00:48:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:11.963 00:48:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:11.963 00:48:46 -- scripts/common.sh@335 -- # IFS=.-: 00:06:11.963 00:48:46 -- scripts/common.sh@335 -- # read -ra ver1 00:06:11.963 00:48:46 -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.963 00:48:46 -- scripts/common.sh@336 -- # read -ra ver2 00:06:11.963 00:48:46 -- scripts/common.sh@337 -- # local 'op=<' 00:06:11.963 00:48:46 -- scripts/common.sh@339 -- # ver1_l=2 00:06:11.963 00:48:46 -- scripts/common.sh@340 -- # ver2_l=1 00:06:11.963 00:48:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:11.963 00:48:46 -- scripts/common.sh@343 -- # case "$op" in 00:06:11.963 00:48:46 -- scripts/common.sh@344 -- # : 1 00:06:11.963 00:48:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:11.963 00:48:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.963 00:48:46 -- scripts/common.sh@364 -- # decimal 1 00:06:11.963 00:48:46 -- scripts/common.sh@352 -- # local d=1 00:06:11.963 00:48:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.963 00:48:46 -- scripts/common.sh@354 -- # echo 1 00:06:11.963 00:48:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:11.963 00:48:46 -- scripts/common.sh@365 -- # decimal 2 00:06:11.963 00:48:46 -- scripts/common.sh@352 -- # local d=2 00:06:11.963 00:48:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.963 00:48:46 -- scripts/common.sh@354 -- # echo 2 00:06:11.963 00:48:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:11.963 00:48:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:11.963 00:48:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:11.963 00:48:46 -- scripts/common.sh@367 -- # return 0 00:06:11.963 00:48:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.963 00:48:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:11.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.963 --rc genhtml_branch_coverage=1 00:06:11.963 --rc genhtml_function_coverage=1 00:06:11.963 --rc genhtml_legend=1 00:06:11.963 --rc geninfo_all_blocks=1 00:06:11.963 --rc geninfo_unexecuted_blocks=1 00:06:11.963 00:06:11.963 ' 00:06:11.963 00:48:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:11.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.963 --rc genhtml_branch_coverage=1 00:06:11.963 --rc genhtml_function_coverage=1 00:06:11.963 --rc genhtml_legend=1 00:06:11.963 --rc geninfo_all_blocks=1 00:06:11.963 --rc geninfo_unexecuted_blocks=1 00:06:11.963 00:06:11.963 ' 00:06:11.963 00:48:46 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:11.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.963 --rc genhtml_branch_coverage=1 00:06:11.963 --rc genhtml_function_coverage=1 00:06:11.963 --rc genhtml_legend=1 00:06:11.963 --rc geninfo_all_blocks=1 00:06:11.963 --rc geninfo_unexecuted_blocks=1 00:06:11.963 00:06:11.963 ' 00:06:11.963 00:48:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:11.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.963 --rc genhtml_branch_coverage=1 00:06:11.963 --rc genhtml_function_coverage=1 00:06:11.963 --rc genhtml_legend=1 00:06:11.963 --rc geninfo_all_blocks=1 00:06:11.963 --rc geninfo_unexecuted_blocks=1 00:06:11.963 00:06:11.963 ' 00:06:11.963 00:48:46 -- setup/driver.sh@68 -- # setup reset 00:06:11.963 00:48:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:11.963 00:48:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:12.530 00:48:46 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:12.530 00:48:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.530 00:48:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.531 00:48:46 -- common/autotest_common.sh@10 -- # set +x 00:06:12.531 ************************************ 00:06:12.531 START TEST guess_driver 00:06:12.531 ************************************ 00:06:12.531 00:48:46 -- common/autotest_common.sh@1114 -- # guess_driver 00:06:12.531 00:48:46 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:12.531 00:48:46 -- setup/driver.sh@47 -- # local fail=0 00:06:12.531 00:48:46 -- setup/driver.sh@49 -- # pick_driver 00:06:12.531 00:48:46 -- setup/driver.sh@36 -- # vfio 00:06:12.531 00:48:46 -- setup/driver.sh@21 -- # local iommu_grups 00:06:12.531 00:48:46 -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:12.531 00:48:46 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:12.531 00:48:46 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:06:12.531 00:48:46 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:12.531 00:48:46 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:06:12.531 00:48:46 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:06:12.531 00:48:46 -- setup/driver.sh@32 -- # return 1 00:06:12.531 00:48:46 -- setup/driver.sh@38 -- # uio 00:06:12.531 00:48:46 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:06:12.531 00:48:46 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:06:12.531 00:48:46 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:06:12.531 00:48:46 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:06:12.531 00:48:46 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:06:12.531 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:06:12.531 00:48:46 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:06:12.531 00:48:46 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:06:12.531 00:48:46 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:12.531 Looking for driver=uio_pci_generic 00:06:12.531 00:48:46 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:06:12.531 00:48:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:12.531 00:48:46 -- setup/driver.sh@45 -- # setup output config 00:06:12.531 00:48:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:12.531 00:48:46 
-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:13.099 00:48:47 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:06:13.099 00:48:47 -- setup/driver.sh@58 -- # continue 00:06:13.099 00:48:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:13.099 00:48:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:13.099 00:48:47 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:13.099 00:48:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:15.002 00:48:49 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:15.002 00:48:49 -- setup/driver.sh@65 -- # setup reset 00:06:15.002 00:48:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:15.002 00:48:49 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:15.570 00:06:15.570 real 0m2.992s 00:06:15.570 user 0m0.426s 00:06:15.570 sys 0m2.569s 00:06:15.570 00:48:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.570 00:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:15.570 ************************************ 00:06:15.570 END TEST guess_driver 00:06:15.570 ************************************ 00:06:15.570 00:06:15.570 real 0m3.861s 00:06:15.570 user 0m0.761s 00:06:15.570 sys 0m3.131s 00:06:15.570 00:48:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.570 00:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:15.570 ************************************ 00:06:15.570 END TEST driver 00:06:15.570 ************************************ 00:06:15.570 00:48:49 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:15.570 00:48:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.570 00:48:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.570 00:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:15.570 ************************************ 00:06:15.570 START TEST devices 00:06:15.570 ************************************ 00:06:15.570 00:48:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:15.830 * Looking for test storage... 
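For readers following the guess_driver trace above, the selection logic it exercises reduces to roughly the sketch below. This is an illustrative rewrite, not SPDK's setup/driver.sh itself: prefer vfio-pci when IOMMU groups exist (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic if modprobe can resolve its module chain.

# Hypothetical helper; the sysfs paths mirror the ones probed in the trace above.
pick_driver() {
    shopt -s nullglob
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci                    # IOMMU groups present: use vfio
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic             # fallback driver, as chosen in this run
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}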
00:06:15.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:15.830 00:48:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:15.830 00:48:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:15.830 00:48:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:15.830 00:48:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:15.830 00:48:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:15.830 00:48:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:15.830 00:48:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:15.830 00:48:50 -- scripts/common.sh@335 -- # IFS=.-: 00:06:15.830 00:48:50 -- scripts/common.sh@335 -- # read -ra ver1 00:06:15.830 00:48:50 -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.830 00:48:50 -- scripts/common.sh@336 -- # read -ra ver2 00:06:15.830 00:48:50 -- scripts/common.sh@337 -- # local 'op=<' 00:06:15.830 00:48:50 -- scripts/common.sh@339 -- # ver1_l=2 00:06:15.830 00:48:50 -- scripts/common.sh@340 -- # ver2_l=1 00:06:15.830 00:48:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:15.830 00:48:50 -- scripts/common.sh@343 -- # case "$op" in 00:06:15.830 00:48:50 -- scripts/common.sh@344 -- # : 1 00:06:15.830 00:48:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:15.830 00:48:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.830 00:48:50 -- scripts/common.sh@364 -- # decimal 1 00:06:15.830 00:48:50 -- scripts/common.sh@352 -- # local d=1 00:06:15.830 00:48:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.830 00:48:50 -- scripts/common.sh@354 -- # echo 1 00:06:15.830 00:48:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:15.830 00:48:50 -- scripts/common.sh@365 -- # decimal 2 00:06:15.830 00:48:50 -- scripts/common.sh@352 -- # local d=2 00:06:15.831 00:48:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.831 00:48:50 -- scripts/common.sh@354 -- # echo 2 00:06:15.831 00:48:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:15.831 00:48:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:15.831 00:48:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:15.831 00:48:50 -- scripts/common.sh@367 -- # return 0 00:06:15.831 00:48:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.831 00:48:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:15.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.831 --rc genhtml_branch_coverage=1 00:06:15.831 --rc genhtml_function_coverage=1 00:06:15.831 --rc genhtml_legend=1 00:06:15.831 --rc geninfo_all_blocks=1 00:06:15.831 --rc geninfo_unexecuted_blocks=1 00:06:15.831 00:06:15.831 ' 00:06:15.831 00:48:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:15.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.831 --rc genhtml_branch_coverage=1 00:06:15.831 --rc genhtml_function_coverage=1 00:06:15.831 --rc genhtml_legend=1 00:06:15.831 --rc geninfo_all_blocks=1 00:06:15.831 --rc geninfo_unexecuted_blocks=1 00:06:15.831 00:06:15.831 ' 00:06:15.831 00:48:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:15.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.831 --rc genhtml_branch_coverage=1 00:06:15.831 --rc genhtml_function_coverage=1 00:06:15.831 --rc genhtml_legend=1 00:06:15.831 --rc geninfo_all_blocks=1 00:06:15.831 --rc geninfo_unexecuted_blocks=1 00:06:15.831 00:06:15.831 ' 00:06:15.831 00:48:50 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:15.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.831 --rc genhtml_branch_coverage=1 00:06:15.831 --rc genhtml_function_coverage=1 00:06:15.831 --rc genhtml_legend=1 00:06:15.831 --rc geninfo_all_blocks=1 00:06:15.831 --rc geninfo_unexecuted_blocks=1 00:06:15.831 00:06:15.831 ' 00:06:15.831 00:48:50 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:15.831 00:48:50 -- setup/devices.sh@192 -- # setup reset 00:06:15.831 00:48:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:15.831 00:48:50 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:16.399 00:48:50 -- setup/devices.sh@194 -- # get_zoned_devs 00:06:16.399 00:48:50 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:06:16.399 00:48:50 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:06:16.399 00:48:50 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:06:16.399 00:48:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:06:16.399 00:48:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:06:16.399 00:48:50 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:06:16.399 00:48:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:16.399 00:48:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:06:16.399 00:48:50 -- setup/devices.sh@196 -- # blocks=() 00:06:16.399 00:48:50 -- setup/devices.sh@196 -- # declare -a blocks 00:06:16.399 00:48:50 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:16.399 00:48:50 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:16.399 00:48:50 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:16.399 00:48:50 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:16.399 00:48:50 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:16.399 00:48:50 -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:16.399 00:48:50 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:06:16.399 00:48:50 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:06:16.399 00:48:50 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:16.399 00:48:50 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:06:16.399 00:48:50 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:16.399 No valid GPT data, bailing 00:06:16.399 00:48:50 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:16.399 00:48:50 -- scripts/common.sh@393 -- # pt= 00:06:16.399 00:48:50 -- scripts/common.sh@394 -- # return 1 00:06:16.399 00:48:50 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:16.399 00:48:50 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:16.399 00:48:50 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:16.399 00:48:50 -- setup/common.sh@80 -- # echo 5368709120 00:06:16.399 00:48:50 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:16.399 00:48:50 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:16.399 00:48:50 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:06:16.399 00:48:50 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:06:16.399 00:48:50 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:16.399 00:48:50 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:16.399 00:48:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.399 00:48:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.399 
00:48:50 -- common/autotest_common.sh@10 -- # set +x 00:06:16.399 ************************************ 00:06:16.399 START TEST nvme_mount 00:06:16.399 ************************************ 00:06:16.399 00:48:50 -- common/autotest_common.sh@1114 -- # nvme_mount 00:06:16.399 00:48:50 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:16.399 00:48:50 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:16.399 00:48:50 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:16.399 00:48:50 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:16.399 00:48:50 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:16.399 00:48:50 -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:16.399 00:48:50 -- setup/common.sh@40 -- # local part_no=1 00:06:16.399 00:48:50 -- setup/common.sh@41 -- # local size=1073741824 00:06:16.399 00:48:50 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:16.399 00:48:50 -- setup/common.sh@44 -- # parts=() 00:06:16.399 00:48:50 -- setup/common.sh@44 -- # local parts 00:06:16.399 00:48:50 -- setup/common.sh@46 -- # (( part = 1 )) 00:06:16.399 00:48:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:16.400 00:48:50 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:16.400 00:48:50 -- setup/common.sh@46 -- # (( part++ )) 00:06:16.400 00:48:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:16.400 00:48:50 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:16.400 00:48:50 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:16.400 00:48:50 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:17.778 Creating new GPT entries in memory. 00:06:17.778 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:17.778 other utilities. 00:06:17.778 00:48:51 -- setup/common.sh@57 -- # (( part = 1 )) 00:06:17.778 00:48:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:17.778 00:48:51 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:17.778 00:48:51 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:17.778 00:48:51 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:18.718 Creating new GPT entries in memory. 00:06:18.718 The operation has completed successfully. 
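Condensed, the disk-preparation flow the nvme_mount test is driving here (and completes just below with mkfs and mount) looks like the following. The commands are lifted from the trace and strung together as a standalone sketch; it is not the test script itself and assumes a scratch NVMe disk that may be wiped.

# Standalone sketch of the traced zap -> partition -> format -> mount sequence.
disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
sgdisk "$disk" --zap-all               # destroy existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191     # one small test partition (sectors 2048-264191)
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"              # quiet, forced ext4 on the new partition
mount "${disk}p1" "$mnt"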
00:06:18.718 00:48:52 -- setup/common.sh@57 -- # (( part++ )) 00:06:18.718 00:48:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:18.718 00:48:52 -- setup/common.sh@62 -- # wait 109281 00:06:18.718 00:48:52 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:18.718 00:48:52 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:18.718 00:48:52 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:18.718 00:48:52 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:18.718 00:48:52 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:18.718 00:48:52 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:18.718 00:48:52 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:18.718 00:48:52 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:18.718 00:48:52 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:18.718 00:48:52 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:18.718 00:48:52 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:18.718 00:48:52 -- setup/devices.sh@53 -- # local found=0 00:06:18.718 00:48:52 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:18.718 00:48:52 -- setup/devices.sh@56 -- # : 00:06:18.718 00:48:52 -- setup/devices.sh@59 -- # local pci status 00:06:18.718 00:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.718 00:48:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:18.718 00:48:52 -- setup/devices.sh@47 -- # setup output config 00:06:18.718 00:48:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:18.718 00:48:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:18.978 00:48:53 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:18.978 00:48:53 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:18.978 00:48:53 -- setup/devices.sh@63 -- # found=1 00:06:18.978 00:48:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.978 00:48:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:18.978 00:48:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.978 00:48:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:18.978 00:48:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.959 00:48:55 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:20.959 00:48:55 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:20.959 00:48:55 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:20.959 00:48:55 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:20.959 00:48:55 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:20.959 00:48:55 -- setup/devices.sh@110 -- # cleanup_nvme 00:06:20.959 00:48:55 -- setup/devices.sh@20 -- # mountpoint -q 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:20.959 00:48:55 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:20.959 00:48:55 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:20.959 00:48:55 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:20.959 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:20.959 00:48:55 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:20.959 00:48:55 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:20.959 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:20.959 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:20.959 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:20.959 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:20.959 00:48:55 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:20.959 00:48:55 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:20.959 00:48:55 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:20.959 00:48:55 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:20.959 00:48:55 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:20.959 00:48:55 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:20.959 00:48:55 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:20.959 00:48:55 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:20.959 00:48:55 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:20.959 00:48:55 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:20.959 00:48:55 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:20.959 00:48:55 -- setup/devices.sh@53 -- # local found=0 00:06:20.959 00:48:55 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:20.959 00:48:55 -- setup/devices.sh@56 -- # : 00:06:20.959 00:48:55 -- setup/devices.sh@59 -- # local pci status 00:06:20.959 00:48:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.959 00:48:55 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:20.959 00:48:55 -- setup/devices.sh@47 -- # setup output config 00:06:20.959 00:48:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:20.959 00:48:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:21.218 00:48:55 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:21.219 00:48:55 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:21.219 00:48:55 -- setup/devices.sh@63 -- # found=1 00:06:21.219 00:48:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.219 00:48:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:21.219 00:48:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.219 00:48:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:21.219 00:48:55 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.126 00:48:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:23.126 00:48:57 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:23.126 00:48:57 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:23.126 00:48:57 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:23.126 00:48:57 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:23.126 00:48:57 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:23.126 00:48:57 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:06:23.126 00:48:57 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:23.127 00:48:57 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:23.127 00:48:57 -- setup/devices.sh@50 -- # local mount_point= 00:06:23.127 00:48:57 -- setup/devices.sh@51 -- # local test_file= 00:06:23.127 00:48:57 -- setup/devices.sh@53 -- # local found=0 00:06:23.127 00:48:57 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:23.127 00:48:57 -- setup/devices.sh@59 -- # local pci status 00:06:23.127 00:48:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.127 00:48:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:23.127 00:48:57 -- setup/devices.sh@47 -- # setup output config 00:06:23.127 00:48:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:23.127 00:48:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:23.388 00:48:57 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:23.388 00:48:57 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:23.388 00:48:57 -- setup/devices.sh@63 -- # found=1 00:06:23.388 00:48:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.388 00:48:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:23.388 00:48:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.647 00:48:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:23.647 00:48:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.566 00:48:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:25.566 00:48:59 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:25.566 00:48:59 -- setup/devices.sh@68 -- # return 0 00:06:25.566 00:48:59 -- setup/devices.sh@128 -- # cleanup_nvme 00:06:25.566 00:48:59 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:25.566 00:48:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:25.566 00:48:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:25.566 00:48:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:25.566 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:25.566 00:06:25.566 real 0m8.857s 00:06:25.566 user 0m0.724s 00:06:25.566 sys 0m6.147s 00:06:25.566 00:48:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.566 00:48:59 -- common/autotest_common.sh@10 -- # set +x 00:06:25.566 ************************************ 00:06:25.566 END TEST nvme_mount 00:06:25.566 ************************************ 00:06:25.566 00:48:59 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:25.566 00:48:59 -- common/autotest_common.sh@1087 
-- # '[' 2 -le 1 ']' 00:06:25.566 00:48:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.566 00:48:59 -- common/autotest_common.sh@10 -- # set +x 00:06:25.566 ************************************ 00:06:25.566 START TEST dm_mount 00:06:25.566 ************************************ 00:06:25.566 00:48:59 -- common/autotest_common.sh@1114 -- # dm_mount 00:06:25.566 00:48:59 -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:25.566 00:48:59 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:25.567 00:48:59 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:25.567 00:48:59 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:25.567 00:48:59 -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:25.567 00:48:59 -- setup/common.sh@40 -- # local part_no=2 00:06:25.567 00:48:59 -- setup/common.sh@41 -- # local size=1073741824 00:06:25.567 00:48:59 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:25.567 00:48:59 -- setup/common.sh@44 -- # parts=() 00:06:25.567 00:48:59 -- setup/common.sh@44 -- # local parts 00:06:25.567 00:48:59 -- setup/common.sh@46 -- # (( part = 1 )) 00:06:25.567 00:48:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:25.567 00:48:59 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:25.567 00:48:59 -- setup/common.sh@46 -- # (( part++ )) 00:06:25.567 00:48:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:25.567 00:48:59 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:25.567 00:48:59 -- setup/common.sh@46 -- # (( part++ )) 00:06:25.567 00:48:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:25.567 00:48:59 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:25.567 00:48:59 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:25.567 00:48:59 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:26.505 Creating new GPT entries in memory. 00:06:26.505 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:26.505 other utilities. 00:06:26.505 00:49:00 -- setup/common.sh@57 -- # (( part = 1 )) 00:06:26.505 00:49:00 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:26.505 00:49:00 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:26.505 00:49:00 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:26.505 00:49:00 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:27.442 Creating new GPT entries in memory. 00:06:27.442 The operation has completed successfully. 00:06:27.442 00:49:01 -- setup/common.sh@57 -- # (( part++ )) 00:06:27.442 00:49:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:27.442 00:49:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:27.442 00:49:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:27.442 00:49:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:28.821 The operation has completed successfully. 
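The dm_mount test now has two fresh partitions; the next entries show it creating a device-mapper target named nvme_dm_test over them and resolving it to /dev/dm-0. A minimal way to reproduce that step is sketched below; the exact mapping table the script builds is not visible in the trace, so the linear concatenation used here is an assumption.

# Assumed linear concatenation of the two test partitions into one dm device.
s1=$(blockdev --getsz /dev/nvme0n1p1)   # partition sizes in 512-byte sectors
s2=$(blockdev --getsz /dev/nvme0n1p2)
printf '0 %s linear /dev/nvme0n1p1 0\n%s %s linear /dev/nvme0n1p2 0\n' "$s1" "$s1" "$s2" \
    | dmsetup create nvme_dm_test
readlink -f /dev/mapper/nvme_dm_test    # resolves to /dev/dm-0 in the run above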
00:06:28.821 00:49:02 -- setup/common.sh@57 -- # (( part++ )) 00:06:28.821 00:49:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:28.821 00:49:02 -- setup/common.sh@62 -- # wait 109800 00:06:28.821 00:49:02 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:28.821 00:49:02 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:28.821 00:49:02 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:28.821 00:49:02 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:28.821 00:49:02 -- setup/devices.sh@160 -- # for t in {1..5} 00:06:28.821 00:49:02 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:28.821 00:49:02 -- setup/devices.sh@161 -- # break 00:06:28.821 00:49:02 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:28.821 00:49:02 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:28.821 00:49:02 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:28.821 00:49:02 -- setup/devices.sh@166 -- # dm=dm-0 00:06:28.821 00:49:02 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:28.821 00:49:02 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:28.821 00:49:02 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:28.821 00:49:02 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:28.821 00:49:02 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:28.821 00:49:02 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:28.821 00:49:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:28.821 00:49:02 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:28.821 00:49:02 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:28.821 00:49:02 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:28.821 00:49:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:28.821 00:49:02 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:28.821 00:49:02 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:28.821 00:49:02 -- setup/devices.sh@53 -- # local found=0 00:06:28.821 00:49:02 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:28.821 00:49:02 -- setup/devices.sh@56 -- # : 00:06:28.821 00:49:02 -- setup/devices.sh@59 -- # local pci status 00:06:28.821 00:49:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.822 00:49:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:28.822 00:49:02 -- setup/devices.sh@47 -- # setup output config 00:06:28.822 00:49:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:28.822 00:49:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:29.081 00:49:03 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:29.081 00:49:03 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:29.081 00:49:03 -- setup/devices.sh@63 -- # found=1 00:06:29.081 00:49:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.081 00:49:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:29.081 00:49:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.081 00:49:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:29.081 00:49:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.987 00:49:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:30.987 00:49:05 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:30.987 00:49:05 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:30.987 00:49:05 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:30.987 00:49:05 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:30.987 00:49:05 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:30.987 00:49:05 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:30.987 00:49:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:30.987 00:49:05 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:30.987 00:49:05 -- setup/devices.sh@50 -- # local mount_point= 00:06:30.987 00:49:05 -- setup/devices.sh@51 -- # local test_file= 00:06:30.987 00:49:05 -- setup/devices.sh@53 -- # local found=0 00:06:30.987 00:49:05 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:30.987 00:49:05 -- setup/devices.sh@59 -- # local pci status 00:06:30.987 00:49:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.987 00:49:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:30.987 00:49:05 -- setup/devices.sh@47 -- # setup output config 00:06:30.987 00:49:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:30.987 00:49:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:31.247 00:49:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:31.247 00:49:05 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:31.247 00:49:05 -- setup/devices.sh@63 -- # found=1 00:06:31.247 00:49:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:31.247 00:49:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:31.247 00:49:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:31.247 00:49:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:31.247 00:49:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.207 00:49:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:33.207 00:49:07 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:33.207 00:49:07 -- setup/devices.sh@68 -- # return 0 00:06:33.207 00:49:07 -- setup/devices.sh@187 -- # cleanup_dm 00:06:33.207 00:49:07 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:33.207 00:49:07 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:33.207 00:49:07 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:33.207 00:49:07 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:33.207 00:49:07 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:33.207 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:33.207 00:49:07 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:33.207 00:49:07 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:33.207 00:06:33.207 real 0m7.749s 00:06:33.207 user 0m0.459s 00:06:33.207 sys 0m4.140s 00:06:33.207 00:49:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.207 00:49:07 -- common/autotest_common.sh@10 -- # set +x 00:06:33.207 ************************************ 00:06:33.207 END TEST dm_mount 00:06:33.207 ************************************ 00:06:33.207 00:49:07 -- setup/devices.sh@1 -- # cleanup 00:06:33.207 00:49:07 -- setup/devices.sh@11 -- # cleanup_nvme 00:06:33.207 00:49:07 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:33.207 00:49:07 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:33.207 00:49:07 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:33.207 00:49:07 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:33.207 00:49:07 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:33.207 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:33.207 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:33.207 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:33.207 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:33.207 00:49:07 -- setup/devices.sh@12 -- # cleanup_dm 00:06:33.207 00:49:07 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:33.207 00:49:07 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:33.208 00:49:07 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:33.208 00:49:07 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:33.208 00:49:07 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:33.208 00:49:07 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:33.208 00:06:33.208 real 0m17.625s 00:06:33.208 user 0m1.718s 00:06:33.208 sys 0m10.782s 00:06:33.208 00:49:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.208 00:49:07 -- common/autotest_common.sh@10 -- # set +x 00:06:33.208 ************************************ 00:06:33.208 END TEST devices 00:06:33.208 ************************************ 00:06:33.208 00:06:33.208 real 0m38.722s 00:06:33.208 user 0m6.998s 00:06:33.208 sys 0m26.858s 00:06:33.208 00:49:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.208 00:49:07 -- common/autotest_common.sh@10 -- # set +x 00:06:33.208 ************************************ 00:06:33.208 END TEST setup.sh 00:06:33.208 ************************************ 00:06:33.467 00:49:07 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:33.467 Hugepages 00:06:33.467 node hugesize free / total 00:06:33.467 node0 1048576kB 0 / 0 00:06:33.467 node0 2048kB 2048 / 2048 00:06:33.467 00:06:33.467 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:33.726 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:33.726 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:33.726 00:49:08 -- spdk/autotest.sh@128 -- # uname -s 00:06:33.726 00:49:08 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:06:33.726 00:49:08 -- spdk/autotest.sh@130 -- # 
nvme_namespace_revert 00:06:33.726 00:49:08 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:34.294 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:34.553 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:36.458 00:49:10 -- common/autotest_common.sh@1527 -- # sleep 1 00:06:37.396 00:49:11 -- common/autotest_common.sh@1528 -- # bdfs=() 00:06:37.396 00:49:11 -- common/autotest_common.sh@1528 -- # local bdfs 00:06:37.396 00:49:11 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:06:37.396 00:49:11 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:06:37.396 00:49:11 -- common/autotest_common.sh@1508 -- # bdfs=() 00:06:37.396 00:49:11 -- common/autotest_common.sh@1508 -- # local bdfs 00:06:37.396 00:49:11 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:37.396 00:49:11 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:37.396 00:49:11 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:06:37.396 00:49:11 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:06:37.396 00:49:11 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:06:37.396 00:49:11 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:37.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:37.654 Waiting for block devices as requested 00:06:37.654 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:06:37.913 00:49:12 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:06:37.913 00:49:12 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:06:37.913 00:49:12 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:06:37.913 00:49:12 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:06:37.913 00:49:12 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:06:37.913 00:49:12 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:06:37.913 00:49:12 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:06:37.913 00:49:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:06:37.913 00:49:12 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:06:37.913 00:49:12 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:06:37.913 00:49:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:37.913 00:49:12 -- common/autotest_common.sh@1540 -- # grep oacs 00:06:37.913 00:49:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:37.913 00:49:12 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:06:37.913 00:49:12 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:06:37.913 00:49:12 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:06:37.913 00:49:12 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:06:37.913 00:49:12 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:06:37.913 00:49:12 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:06:37.913 00:49:12 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:06:37.913 00:49:12 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:06:37.913 00:49:12 -- common/autotest_common.sh@1552 -- # continue 00:06:37.913 00:49:12 
-- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:06:37.913 00:49:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:37.913 00:49:12 -- common/autotest_common.sh@10 -- # set +x 00:06:37.913 00:49:12 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:06:37.913 00:49:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:37.913 00:49:12 -- common/autotest_common.sh@10 -- # set +x 00:06:37.913 00:49:12 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:38.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:38.481 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:40.387 00:49:14 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:06:40.387 00:49:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:40.387 00:49:14 -- common/autotest_common.sh@10 -- # set +x 00:06:40.387 00:49:14 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:06:40.387 00:49:14 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:06:40.387 00:49:14 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:06:40.387 00:49:14 -- common/autotest_common.sh@1572 -- # bdfs=() 00:06:40.387 00:49:14 -- common/autotest_common.sh@1572 -- # local bdfs 00:06:40.387 00:49:14 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:06:40.387 00:49:14 -- common/autotest_common.sh@1508 -- # bdfs=() 00:06:40.387 00:49:14 -- common/autotest_common.sh@1508 -- # local bdfs 00:06:40.387 00:49:14 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:40.387 00:49:14 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:40.387 00:49:14 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:06:40.387 00:49:14 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:06:40.387 00:49:14 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:06:40.387 00:49:14 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:06:40.387 00:49:14 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:06:40.387 00:49:14 -- common/autotest_common.sh@1575 -- # device=0x0010 00:06:40.387 00:49:14 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:40.387 00:49:14 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:06:40.387 00:49:14 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:06:40.387 00:49:14 -- common/autotest_common.sh@1588 -- # return 0 00:06:40.387 00:49:14 -- spdk/autotest.sh@148 -- # '[' 1 -eq 1 ']' 00:06:40.387 00:49:14 -- spdk/autotest.sh@149 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:40.387 00:49:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.387 00:49:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.387 00:49:14 -- common/autotest_common.sh@10 -- # set +x 00:06:40.387 ************************************ 00:06:40.387 START TEST unittest 00:06:40.387 ************************************ 00:06:40.387 00:49:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:40.387 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:40.387 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:06:40.648 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:06:40.648 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:40.648 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:06:40.648 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:40.648 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:40.648 ++ rpc_py=rpc_cmd 00:06:40.648 ++ set -e 00:06:40.648 ++ shopt -s nullglob 00:06:40.648 ++ shopt -s extglob 00:06:40.648 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:40.648 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:40.648 +++ CONFIG_WPDK_DIR= 00:06:40.648 +++ CONFIG_ASAN=y 00:06:40.648 +++ CONFIG_VBDEV_COMPRESS=n 00:06:40.648 +++ CONFIG_HAVE_EXECINFO_H=y 00:06:40.648 +++ CONFIG_USDT=n 00:06:40.648 +++ CONFIG_CUSTOMOCF=n 00:06:40.648 +++ CONFIG_PREFIX=/usr/local 00:06:40.648 +++ CONFIG_RBD=n 00:06:40.648 +++ CONFIG_LIBDIR= 00:06:40.648 +++ CONFIG_IDXD=y 00:06:40.648 +++ CONFIG_NVME_CUSE=y 00:06:40.648 +++ CONFIG_SMA=n 00:06:40.648 +++ CONFIG_VTUNE=n 00:06:40.648 +++ CONFIG_TSAN=n 00:06:40.648 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:40.648 +++ CONFIG_VFIO_USER_DIR= 00:06:40.648 +++ CONFIG_PGO_CAPTURE=n 00:06:40.648 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:40.648 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:40.648 +++ CONFIG_LTO=n 00:06:40.648 +++ CONFIG_ISCSI_INITIATOR=y 00:06:40.648 +++ CONFIG_CET=n 00:06:40.648 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:40.648 +++ CONFIG_OCF_PATH= 00:06:40.648 +++ CONFIG_RDMA_SET_TOS=y 00:06:40.648 +++ CONFIG_HAVE_ARC4RANDOM=n 00:06:40.648 +++ CONFIG_HAVE_LIBARCHIVE=n 00:06:40.648 +++ CONFIG_UBLK=n 00:06:40.648 +++ CONFIG_ISAL_CRYPTO=y 00:06:40.648 +++ CONFIG_OPENSSL_PATH= 00:06:40.648 +++ CONFIG_OCF=n 00:06:40.648 +++ CONFIG_FUSE=n 00:06:40.648 +++ CONFIG_VTUNE_DIR= 00:06:40.648 +++ CONFIG_FUZZER_LIB= 00:06:40.648 +++ CONFIG_FUZZER=n 00:06:40.648 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:06:40.648 +++ CONFIG_CRYPTO=n 00:06:40.648 +++ CONFIG_PGO_USE=n 00:06:40.648 +++ CONFIG_VHOST=y 00:06:40.648 +++ CONFIG_DAOS=n 00:06:40.648 +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:06:40.648 +++ CONFIG_DAOS_DIR= 00:06:40.648 +++ CONFIG_UNIT_TESTS=y 00:06:40.648 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:40.648 +++ CONFIG_VIRTIO=y 00:06:40.648 +++ CONFIG_COVERAGE=y 00:06:40.648 +++ CONFIG_RDMA=y 00:06:40.648 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:40.648 +++ CONFIG_URING_PATH= 00:06:40.648 +++ CONFIG_XNVME=n 00:06:40.648 +++ CONFIG_VFIO_USER=n 00:06:40.648 +++ CONFIG_ARCH=native 00:06:40.648 +++ CONFIG_URING_ZNS=n 00:06:40.648 +++ CONFIG_WERROR=y 00:06:40.648 +++ CONFIG_HAVE_LIBBSD=n 00:06:40.648 +++ CONFIG_UBSAN=y 00:06:40.648 +++ CONFIG_IPSEC_MB_DIR= 00:06:40.648 +++ CONFIG_GOLANG=n 00:06:40.648 +++ CONFIG_ISAL=y 00:06:40.648 +++ CONFIG_IDXD_KERNEL=n 00:06:40.648 +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:40.648 +++ CONFIG_RDMA_PROV=verbs 00:06:40.648 +++ CONFIG_APPS=y 00:06:40.648 +++ CONFIG_SHARED=n 00:06:40.648 +++ CONFIG_FC_PATH= 00:06:40.648 +++ CONFIG_DPDK_PKG_CONFIG=n 00:06:40.648 +++ CONFIG_FC=n 00:06:40.648 +++ CONFIG_AVAHI=n 00:06:40.648 +++ CONFIG_FIO_PLUGIN=y 00:06:40.648 +++ CONFIG_RAID5F=y 00:06:40.648 +++ CONFIG_EXAMPLES=y 00:06:40.648 +++ CONFIG_TESTS=y 00:06:40.648 +++ CONFIG_CRYPTO_MLX5=n 00:06:40.648 +++ CONFIG_MAX_LCORES= 00:06:40.648 +++ CONFIG_IPSEC_MB=n 00:06:40.648 +++ CONFIG_DEBUG=y 00:06:40.648 +++ CONFIG_DPDK_COMPRESSDEV=n 00:06:40.648 +++ CONFIG_CROSS_PREFIX= 00:06:40.648 +++ CONFIG_URING=n 00:06:40.648 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:40.648 +++++ dirname 
/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:40.648 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:40.648 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:40.648 +++ _root=/home/vagrant/spdk_repo/spdk 00:06:40.648 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:40.648 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:40.648 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:40.648 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:40.648 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:40.648 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:40.648 +++ VHOST_APP=("$_app_dir/vhost") 00:06:40.648 +++ DD_APP=("$_app_dir/spdk_dd") 00:06:40.648 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:06:40.648 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:40.648 +++ [[ #ifndef SPDK_CONFIG_H 00:06:40.648 #define SPDK_CONFIG_H 00:06:40.648 #define SPDK_CONFIG_APPS 1 00:06:40.648 #define SPDK_CONFIG_ARCH native 00:06:40.648 #define SPDK_CONFIG_ASAN 1 00:06:40.648 #undef SPDK_CONFIG_AVAHI 00:06:40.648 #undef SPDK_CONFIG_CET 00:06:40.648 #define SPDK_CONFIG_COVERAGE 1 00:06:40.648 #define SPDK_CONFIG_CROSS_PREFIX 00:06:40.648 #undef SPDK_CONFIG_CRYPTO 00:06:40.648 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:40.648 #undef SPDK_CONFIG_CUSTOMOCF 00:06:40.648 #undef SPDK_CONFIG_DAOS 00:06:40.648 #define SPDK_CONFIG_DAOS_DIR 00:06:40.648 #define SPDK_CONFIG_DEBUG 1 00:06:40.648 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:40.648 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:06:40.648 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:06:40.648 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:06:40.648 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:40.648 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:40.648 #define SPDK_CONFIG_EXAMPLES 1 00:06:40.648 #undef SPDK_CONFIG_FC 00:06:40.648 #define SPDK_CONFIG_FC_PATH 00:06:40.648 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:40.648 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:40.648 #undef SPDK_CONFIG_FUSE 00:06:40.648 #undef SPDK_CONFIG_FUZZER 00:06:40.648 #define SPDK_CONFIG_FUZZER_LIB 00:06:40.648 #undef SPDK_CONFIG_GOLANG 00:06:40.648 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:06:40.648 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:40.648 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:40.648 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:40.648 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:40.648 #define SPDK_CONFIG_IDXD 1 00:06:40.648 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:40.648 #undef SPDK_CONFIG_IPSEC_MB 00:06:40.648 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:40.648 #define SPDK_CONFIG_ISAL 1 00:06:40.648 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:40.648 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:40.648 #define SPDK_CONFIG_LIBDIR 00:06:40.648 #undef SPDK_CONFIG_LTO 00:06:40.648 #define SPDK_CONFIG_MAX_LCORES 00:06:40.648 #define SPDK_CONFIG_NVME_CUSE 1 00:06:40.648 #undef SPDK_CONFIG_OCF 00:06:40.648 #define SPDK_CONFIG_OCF_PATH 00:06:40.648 #define SPDK_CONFIG_OPENSSL_PATH 00:06:40.648 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:40.648 #undef SPDK_CONFIG_PGO_USE 00:06:40.648 #define SPDK_CONFIG_PREFIX /usr/local 00:06:40.648 #define SPDK_CONFIG_RAID5F 1 00:06:40.648 #undef SPDK_CONFIG_RBD 00:06:40.648 #define SPDK_CONFIG_RDMA 1 00:06:40.648 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:40.648 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:40.648 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:40.648 
#define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:40.648 #undef SPDK_CONFIG_SHARED 00:06:40.648 #undef SPDK_CONFIG_SMA 00:06:40.648 #define SPDK_CONFIG_TESTS 1 00:06:40.648 #undef SPDK_CONFIG_TSAN 00:06:40.648 #undef SPDK_CONFIG_UBLK 00:06:40.648 #define SPDK_CONFIG_UBSAN 1 00:06:40.648 #define SPDK_CONFIG_UNIT_TESTS 1 00:06:40.648 #undef SPDK_CONFIG_URING 00:06:40.648 #define SPDK_CONFIG_URING_PATH 00:06:40.648 #undef SPDK_CONFIG_URING_ZNS 00:06:40.648 #undef SPDK_CONFIG_USDT 00:06:40.648 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:40.648 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:40.648 #undef SPDK_CONFIG_VFIO_USER 00:06:40.649 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:40.649 #define SPDK_CONFIG_VHOST 1 00:06:40.649 #define SPDK_CONFIG_VIRTIO 1 00:06:40.649 #undef SPDK_CONFIG_VTUNE 00:06:40.649 #define SPDK_CONFIG_VTUNE_DIR 00:06:40.649 #define SPDK_CONFIG_WERROR 1 00:06:40.649 #define SPDK_CONFIG_WPDK_DIR 00:06:40.649 #undef SPDK_CONFIG_XNVME 00:06:40.649 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:40.649 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:40.649 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:40.649 +++ [[ -e /bin/wpdk_common.sh ]] 00:06:40.649 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.649 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.649 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:40.649 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:40.649 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:40.649 ++++ export PATH 00:06:40.649 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:40.649 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:40.649 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:40.649 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:40.649 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:40.649 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:40.649 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:40.649 +++ TEST_TAG=N/A 00:06:40.649 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:40.649 ++ : 1 00:06:40.649 ++ export RUN_NIGHTLY 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_RUN_VALGRIND 00:06:40.649 ++ : 1 00:06:40.649 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:06:40.649 ++ : 1 00:06:40.649 ++ export 
SPDK_TEST_UNITTEST 00:06:40.649 ++ : 00:06:40.649 ++ export SPDK_TEST_AUTOBUILD 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_RELEASE_BUILD 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_ISAL 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_ISCSI 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_ISCSI_INITIATOR 00:06:40.649 ++ : 1 00:06:40.649 ++ export SPDK_TEST_NVME 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_NVME_PMR 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_NVME_BP 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_NVME_CLI 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_NVME_CUSE 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_NVME_FDP 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_NVMF 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_VFIOUSER 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_VFIOUSER_QEMU 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_FUZZER 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_FUZZER_SHORT 00:06:40.649 ++ : rdma 00:06:40.649 ++ export SPDK_TEST_NVMF_TRANSPORT 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_RBD 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_VHOST 00:06:40.649 ++ : 1 00:06:40.649 ++ export SPDK_TEST_BLOCKDEV 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_IOAT 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_BLOBFS 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_VHOST_INIT 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_LVOL 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_VBDEV_COMPRESS 00:06:40.649 ++ : 1 00:06:40.649 ++ export SPDK_RUN_ASAN 00:06:40.649 ++ : 1 00:06:40.649 ++ export SPDK_RUN_UBSAN 00:06:40.649 ++ : /home/vagrant/spdk_repo/dpdk/build 00:06:40.649 ++ export SPDK_RUN_EXTERNAL_DPDK 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_RUN_NON_ROOT 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_CRYPTO 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_FTL 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_OCF 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_VMD 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_OPAL 00:06:40.649 ++ : v22.11.4 00:06:40.649 ++ export SPDK_TEST_NATIVE_DPDK 00:06:40.649 ++ : true 00:06:40.649 ++ export SPDK_AUTOTEST_X 00:06:40.649 ++ : 1 00:06:40.649 ++ export SPDK_TEST_RAID5 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_URING 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_USDT 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_USE_IGB_UIO 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_SCHEDULER 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_SCANBUILD 00:06:40.649 ++ : 00:06:40.649 ++ export SPDK_TEST_NVMF_NICS 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_SMA 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_DAOS 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_XNVME 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_ACCEL_DSA 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_ACCEL_IAA 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_ACCEL_IOAT 00:06:40.649 ++ : 00:06:40.649 ++ export SPDK_TEST_FUZZER_TARGET 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_TEST_NVMF_MDNS 00:06:40.649 ++ : 0 00:06:40.649 ++ export SPDK_JSONRPC_GO_CLIENT 00:06:40.649 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:40.649 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:40.649 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:40.649 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:40.649 
++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:40.649 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:40.649 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:40.649 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:40.649 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:40.649 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:06:40.649 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:40.649 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:40.649 ++ export PYTHONDONTWRITEBYTECODE=1 00:06:40.649 ++ PYTHONDONTWRITEBYTECODE=1 00:06:40.649 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:40.649 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:40.649 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:40.649 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:40.649 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:06:40.649 ++ rm -rf /var/tmp/asan_suppression_file 00:06:40.649 ++ cat 00:06:40.649 ++ echo leak:libfuse3.so 00:06:40.649 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:40.649 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:40.649 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:40.649 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:40.649 ++ '[' -z /var/spdk/dependencies ']' 00:06:40.649 ++ export DEPENDENCY_DIR 00:06:40.649 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:40.649 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:40.649 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:40.649 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:40.649 ++ export QEMU_BIN= 00:06:40.649 ++ QEMU_BIN= 00:06:40.649 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:40.649 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:40.649 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:40.649 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:40.649 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:40.649 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:40.649 ++ _LCOV_MAIN=0 00:06:40.649 ++ _LCOV_LLVM=1 00:06:40.649 ++ _LCOV= 00:06:40.649 ++ [[ '' == *clang* ]] 00:06:40.649 ++ [[ 0 -eq 1 ]] 00:06:40.650 ++ 
_lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:40.650 ++ _lcov_opt[_LCOV_MAIN]= 00:06:40.650 ++ lcov_opt= 00:06:40.650 ++ '[' 0 -eq 0 ']' 00:06:40.650 ++ export valgrind= 00:06:40.650 ++ valgrind= 00:06:40.650 +++ uname -s 00:06:40.650 ++ '[' Linux = Linux ']' 00:06:40.650 ++ HUGEMEM=4096 00:06:40.650 ++ export CLEAR_HUGE=yes 00:06:40.650 ++ CLEAR_HUGE=yes 00:06:40.650 ++ [[ 0 -eq 1 ]] 00:06:40.650 ++ [[ 0 -eq 1 ]] 00:06:40.650 ++ MAKE=make 00:06:40.650 +++ nproc 00:06:40.650 ++ MAKEFLAGS=-j10 00:06:40.650 ++ export HUGEMEM=4096 00:06:40.650 ++ HUGEMEM=4096 00:06:40.650 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:40.650 ++ NO_HUGE=() 00:06:40.650 ++ TEST_MODE= 00:06:40.650 ++ [[ -z '' ]] 00:06:40.650 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:40.650 ++ exec 00:06:40.650 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:40.650 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:06:40.650 ++ set_test_storage 2147483648 00:06:40.650 ++ [[ -v testdir ]] 00:06:40.650 ++ local requested_size=2147483648 00:06:40.650 ++ local mount target_dir 00:06:40.650 ++ local -A mounts fss sizes avails uses 00:06:40.650 ++ local source fs size avail mount use 00:06:40.650 ++ local storage_fallback storage_candidates 00:06:40.650 +++ mktemp -udt spdk.XXXXXX 00:06:40.650 ++ storage_fallback=/tmp/spdk.BpPumY 00:06:40.650 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:40.650 ++ [[ -n '' ]] 00:06:40.650 ++ [[ -n '' ]] 00:06:40.650 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.BpPumY/tests/unit /tmp/spdk.BpPumY 00:06:40.650 ++ requested_size=2214592512 00:06:40.650 ++ read -r source fs size use avail _ mount 00:06:40.650 +++ df -T 00:06:40.650 +++ grep -v Filesystem 00:06:40.650 ++ mounts["$mount"]=tmpfs 00:06:40.650 ++ fss["$mount"]=tmpfs 00:06:40.650 ++ avails["$mount"]=1252601856 00:06:40.650 ++ sizes["$mount"]=1253683200 00:06:40.650 ++ uses["$mount"]=1081344 00:06:40.650 ++ read -r source fs size use avail _ mount 00:06:40.650 ++ mounts["$mount"]=/dev/vda1 00:06:40.650 ++ fss["$mount"]=ext4 00:06:40.650 ++ avails["$mount"]=9444626432 00:06:40.650 ++ sizes["$mount"]=20616794112 00:06:40.650 ++ uses["$mount"]=11155390464 00:06:40.650 ++ read -r source fs size use avail _ mount 00:06:40.650 ++ mounts["$mount"]=tmpfs 00:06:40.650 ++ fss["$mount"]=tmpfs 00:06:40.650 ++ avails["$mount"]=6268403712 00:06:40.650 ++ sizes["$mount"]=6268403712 00:06:40.650 ++ uses["$mount"]=0 00:06:40.650 ++ read -r source fs size use avail _ mount 00:06:40.650 ++ mounts["$mount"]=tmpfs 00:06:40.650 ++ fss["$mount"]=tmpfs 00:06:40.650 ++ avails["$mount"]=5242880 00:06:40.650 ++ sizes["$mount"]=5242880 00:06:40.650 ++ uses["$mount"]=0 00:06:40.650 ++ read -r source fs size use avail _ mount 00:06:40.650 ++ mounts["$mount"]=/dev/vda15 00:06:40.650 ++ fss["$mount"]=vfat 00:06:40.650 ++ avails["$mount"]=103061504 00:06:40.650 ++ sizes["$mount"]=109395968 00:06:40.650 ++ uses["$mount"]=6334464 00:06:40.650 ++ read -r source fs size use avail _ mount 00:06:40.650 ++ mounts["$mount"]=tmpfs 00:06:40.650 ++ fss["$mount"]=tmpfs 00:06:40.650 ++ avails["$mount"]=1253675008 00:06:40.650 ++ sizes["$mount"]=1253679104 00:06:40.650 ++ uses["$mount"]=4096 00:06:40.650 ++ read -r source fs size use avail _ mount 00:06:40.650 ++ 
mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:06:40.650 ++ fss["$mount"]=fuse.sshfs 00:06:40.650 ++ avails["$mount"]=94995218432 00:06:40.650 ++ sizes["$mount"]=105088212992 00:06:40.650 ++ uses["$mount"]=4707561472 00:06:40.650 ++ read -r source fs size use avail _ mount 00:06:40.650 ++ printf '* Looking for test storage...\n' 00:06:40.650 * Looking for test storage... 00:06:40.650 ++ local target_space new_size 00:06:40.650 ++ for target_dir in "${storage_candidates[@]}" 00:06:40.650 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:06:40.650 +++ awk '$1 !~ /Filesystem/{print $6}' 00:06:40.650 ++ mount=/ 00:06:40.650 ++ target_space=9444626432 00:06:40.650 ++ (( target_space == 0 || target_space < requested_size )) 00:06:40.650 ++ (( target_space >= requested_size )) 00:06:40.650 ++ [[ ext4 == tmpfs ]] 00:06:40.650 ++ [[ ext4 == ramfs ]] 00:06:40.650 ++ [[ / == / ]] 00:06:40.650 ++ new_size=13369982976 00:06:40.650 ++ (( new_size * 100 / sizes[/] > 95 )) 00:06:40.650 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:40.650 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:40.650 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:06:40.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:06:40.650 ++ return 0 00:06:40.650 ++ set -o errtrace 00:06:40.650 ++ shopt -s extdebug 00:06:40.650 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:06:40.650 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:40.650 00:49:14 -- common/autotest_common.sh@1682 -- # true 00:06:40.650 00:49:14 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:06:40.650 00:49:14 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:06:40.650 00:49:14 -- common/autotest_common.sh@29 -- # exec 00:06:40.650 00:49:14 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:40.650 00:49:14 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:40.650 00:49:14 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:40.650 00:49:14 -- common/autotest_common.sh@18 -- # set -x 00:06:40.650 00:49:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:40.650 00:49:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:40.650 00:49:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:40.650 00:49:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:40.650 00:49:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:40.650 00:49:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:40.650 00:49:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:40.650 00:49:15 -- scripts/common.sh@335 -- # IFS=.-: 00:06:40.650 00:49:15 -- scripts/common.sh@335 -- # read -ra ver1 00:06:40.650 00:49:15 -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.650 00:49:15 -- scripts/common.sh@336 -- # read -ra ver2 00:06:40.650 00:49:15 -- scripts/common.sh@337 -- # local 'op=<' 00:06:40.650 00:49:15 -- scripts/common.sh@339 -- # ver1_l=2 00:06:40.650 00:49:15 -- scripts/common.sh@340 -- # ver2_l=1 00:06:40.650 00:49:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:40.650 00:49:15 -- scripts/common.sh@343 -- # case "$op" in 00:06:40.650 00:49:15 -- scripts/common.sh@344 -- # : 1 00:06:40.650 00:49:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:40.650 00:49:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.650 00:49:15 -- scripts/common.sh@364 -- # decimal 1 00:06:40.650 00:49:15 -- scripts/common.sh@352 -- # local d=1 00:06:40.650 00:49:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.650 00:49:15 -- scripts/common.sh@354 -- # echo 1 00:06:40.650 00:49:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:40.650 00:49:15 -- scripts/common.sh@365 -- # decimal 2 00:06:40.650 00:49:15 -- scripts/common.sh@352 -- # local d=2 00:06:40.650 00:49:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.650 00:49:15 -- scripts/common.sh@354 -- # echo 2 00:06:40.650 00:49:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:40.650 00:49:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:40.650 00:49:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:40.650 00:49:15 -- scripts/common.sh@367 -- # return 0 00:06:40.650 00:49:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.650 00:49:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:40.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.650 --rc genhtml_branch_coverage=1 00:06:40.650 --rc genhtml_function_coverage=1 00:06:40.650 --rc genhtml_legend=1 00:06:40.650 --rc geninfo_all_blocks=1 00:06:40.650 --rc geninfo_unexecuted_blocks=1 00:06:40.650 00:06:40.650 ' 00:06:40.650 00:49:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:40.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.650 --rc genhtml_branch_coverage=1 00:06:40.650 --rc genhtml_function_coverage=1 00:06:40.650 --rc genhtml_legend=1 00:06:40.650 --rc geninfo_all_blocks=1 00:06:40.650 --rc geninfo_unexecuted_blocks=1 00:06:40.650 00:06:40.650 ' 00:06:40.650 00:49:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:40.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.650 --rc genhtml_branch_coverage=1 00:06:40.650 --rc genhtml_function_coverage=1 00:06:40.650 --rc genhtml_legend=1 00:06:40.650 --rc geninfo_all_blocks=1 00:06:40.650 --rc geninfo_unexecuted_blocks=1 00:06:40.650 00:06:40.650 ' 00:06:40.650 00:49:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:40.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.650 --rc genhtml_branch_coverage=1 00:06:40.650 --rc genhtml_function_coverage=1 00:06:40.650 --rc genhtml_legend=1 00:06:40.650 --rc geninfo_all_blocks=1 00:06:40.650 --rc geninfo_unexecuted_blocks=1 00:06:40.650 00:06:40.650 ' 00:06:40.650 00:49:15 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:06:40.650 00:49:15 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:06:40.650 00:49:15 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:06:40.651 00:49:15 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:06:40.651 00:49:15 -- unit/unittest.sh@174 -- # [[ y == y ]] 00:06:40.651 00:49:15 -- unit/unittest.sh@175 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:40.651 00:49:15 -- unit/unittest.sh@176 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:40.651 00:49:15 -- unit/unittest.sh@178 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:06:55.575 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:06:55.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:06:55.575 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:06:55.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:06:55.575 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:06:55.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:07:27.663 00:49:57 -- unit/unittest.sh@182 -- # uname -m 00:07:27.663 00:49:57 -- unit/unittest.sh@182 -- # '[' x86_64 = aarch64 ']' 00:07:27.663 00:49:57 -- unit/unittest.sh@186 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:07:27.663 00:49:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.663 00:49:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.663 00:49:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.663 ************************************ 00:07:27.663 START TEST unittest_pci_event 00:07:27.663 ************************************ 00:07:27.663 00:49:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:07:27.663 00:07:27.663 00:07:27.663 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.663 http://cunit.sourceforge.net/ 00:07:27.663 00:07:27.663 00:07:27.663 Suite: pci_event 00:07:27.663 Test: test_pci_parse_event ...[2024-11-18 00:49:57.608255] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:07:27.663 [2024-11-18 00:49:57.609053] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:07:27.663 passed 00:07:27.663 00:07:27.663 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.663 suites 1 1 n/a 0 0 00:07:27.663 tests 1 1 1 0 0 00:07:27.663 asserts 15 15 15 0 n/a 00:07:27.663 00:07:27.663 Elapsed time = 0.001 seconds 00:07:27.663 00:07:27.663 real 0m0.048s 00:07:27.663 user 0m0.023s 00:07:27.663 sys 0m0.022s 00:07:27.663 00:49:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.663 00:49:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.663 ************************************ 00:07:27.663 END TEST unittest_pci_event 00:07:27.663 ************************************ 00:07:27.663 00:49:57 -- unit/unittest.sh@187 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:07:27.663 00:49:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.663 00:49:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.663 00:49:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.663 ************************************ 00:07:27.663 START TEST unittest_include 00:07:27.663 ************************************ 00:07:27.663 00:49:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:07:27.663 00:07:27.663 00:07:27.663 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.663 
http://cunit.sourceforge.net/ 00:07:27.663 00:07:27.663 00:07:27.663 Suite: histogram 00:07:27.663 Test: histogram_test ...passed 00:07:27.663 Test: histogram_merge ...passed 00:07:27.663 00:07:27.663 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.663 suites 1 1 n/a 0 0 00:07:27.663 tests 2 2 2 0 0 00:07:27.663 asserts 50 50 50 0 n/a 00:07:27.663 00:07:27.663 Elapsed time = 0.006 seconds 00:07:27.663 00:07:27.663 real 0m0.045s 00:07:27.663 user 0m0.029s 00:07:27.663 sys 0m0.017s 00:07:27.663 00:49:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.663 ************************************ 00:07:27.663 END TEST unittest_include 00:07:27.663 ************************************ 00:07:27.663 00:49:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.663 00:49:57 -- unit/unittest.sh@188 -- # run_test unittest_bdev unittest_bdev 00:07:27.663 00:49:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.663 00:49:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.663 00:49:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.663 ************************************ 00:07:27.664 START TEST unittest_bdev 00:07:27.664 ************************************ 00:07:27.664 00:49:57 -- common/autotest_common.sh@1114 -- # unittest_bdev 00:07:27.664 00:49:57 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:07:27.664 00:07:27.664 00:07:27.664 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.664 http://cunit.sourceforge.net/ 00:07:27.664 00:07:27.664 00:07:27.664 Suite: bdev 00:07:27.664 Test: bytes_to_blocks_test ...passed 00:07:27.664 Test: num_blocks_test ...passed 00:07:27.664 Test: io_valid_test ...passed 00:07:27.664 Test: open_write_test ...[2024-11-18 00:49:57.936573] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:07:27.664 [2024-11-18 00:49:57.937055] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:07:27.664 [2024-11-18 00:49:57.937222] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:07:27.664 passed 00:07:27.664 Test: claim_test ...passed 00:07:27.664 Test: alias_add_del_test ...[2024-11-18 00:49:58.096883] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:07:27.664 [2024-11-18 00:49:58.097104] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:07:27.664 [2024-11-18 00:49:58.097183] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:07:27.664 passed 00:07:27.664 Test: get_device_stat_test ...passed 00:07:27.664 Test: bdev_io_types_test ...passed 00:07:27.664 Test: bdev_io_wait_test ...passed 00:07:27.664 Test: bdev_io_spans_split_test ...passed 00:07:27.664 Test: bdev_io_boundary_split_test ...passed 00:07:27.664 Test: bdev_io_max_size_and_segment_split_test ...[2024-11-18 00:49:58.334847] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:07:27.664 passed 00:07:27.664 Test: bdev_io_mix_split_test ...passed 00:07:27.664 Test: bdev_io_split_with_io_wait ...passed 00:07:27.664 Test: bdev_io_write_unit_split_test ...[2024-11-18 00:49:58.508952] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:07:27.664 [2024-11-18 00:49:58.509079] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:07:27.664 [2024-11-18 00:49:58.509114] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:07:27.664 [2024-11-18 00:49:58.509156] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:07:27.664 passed 00:07:27.664 Test: bdev_io_alignment_with_boundary ...passed 00:07:27.664 Test: bdev_io_alignment ...passed 00:07:27.664 Test: bdev_histograms ...passed 00:07:27.664 Test: bdev_write_zeroes ...passed 00:07:27.664 Test: bdev_compare_and_write ...passed 00:07:27.664 Test: bdev_compare ...passed 00:07:27.664 Test: bdev_compare_emulated ...passed 00:07:27.664 Test: bdev_zcopy_write ...passed 00:07:27.664 Test: bdev_zcopy_read ...passed 00:07:27.664 Test: bdev_open_while_hotremove ...passed 00:07:27.664 Test: bdev_close_while_hotremove ...passed 00:07:27.664 Test: bdev_open_ext_test ...[2024-11-18 00:49:59.224504] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:07:27.664 passed 00:07:27.664 Test: bdev_open_ext_unregister ...passed 00:07:27.664 Test: bdev_set_io_timeout ...[2024-11-18 00:49:59.224736] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:07:27.664 passed 00:07:27.664 Test: bdev_set_qd_sampling ...passed 00:07:27.664 Test: lba_range_overlap ...passed 00:07:27.664 Test: lock_lba_range_check_ranges ...passed 00:07:27.664 Test: lock_lba_range_with_io_outstanding ...passed 00:07:27.664 Test: lock_lba_range_overlapped ...passed 00:07:27.664 Test: bdev_quiesce ...[2024-11-18 00:49:59.522102] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:07:27.664 passed 00:07:27.664 Test: bdev_io_abort ...passed 00:07:27.664 Test: bdev_unmap ...passed 00:07:27.664 Test: bdev_write_zeroes_split_test ...passed 00:07:27.664 Test: bdev_set_options_test ...passed 00:07:27.664 Test: bdev_get_memory_domains ...passed 00:07:27.664 Test: bdev_io_ext ...[2024-11-18 00:49:59.733017] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:07:27.664 passed 00:07:27.664 Test: bdev_io_ext_no_opts ...passed 00:07:27.664 Test: bdev_io_ext_invalid_opts ...passed 00:07:27.664 Test: bdev_io_ext_split ...passed 00:07:27.664 Test: bdev_io_ext_bounce_buffer ...passed 00:07:27.664 Test: bdev_register_uuid_alias ...[2024-11-18 00:50:00.041796] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 02ff75a5-36e7-4c51-b3f4-8639d931e445 already exists 00:07:27.664 [2024-11-18 00:50:00.041903] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:02ff75a5-36e7-4c51-b3f4-8639d931e445 alias for bdev bdev0 00:07:27.664 passed 00:07:27.664 Test: bdev_unregister_by_name ...[2024-11-18 00:50:00.075055] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:07:27.664 [2024-11-18 00:50:00.075139] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:07:27.664 passed 00:07:27.664 Test: for_each_bdev_test ...passed 00:07:27.664 Test: bdev_seek_test ...passed 00:07:27.664 Test: bdev_copy ...passed 00:07:27.664 Test: bdev_copy_split_test ...passed 00:07:27.664 Test: examine_locks ...passed 00:07:27.664 Test: claim_v2_rwo ...[2024-11-18 00:50:00.241219] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.241304] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:27.664 passed 00:07:27.664 Test: claim_v2_rom ...[2024-11-18 00:50:00.241324] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.241381] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.241398] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.241449] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:07:27.664 [2024-11-18 00:50:00.241557] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.241605] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:27.664 passed 00:07:27.664 Test: claim_v2_rwm ...[2024-11-18 00:50:00.241636] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: 
type read_many_write_none by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.241661] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.241737] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:07:27.664 [2024-11-18 00:50:00.241767] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:27.664 [2024-11-18 00:50:00.241856] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:07:27.664 [2024-11-18 00:50:00.241901] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:27.664 passed 00:07:27.664 Test: claim_v2_existing_writer ...[2024-11-18 00:50:00.241926] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.241947] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.241964] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.241987] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.242023] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:07:27.664 [2024-11-18 00:50:00.242122] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:27.664 [2024-11-18 00:50:00.242157] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:27.664 passed 00:07:27.664 Test: claim_v2_existing_v1 ...passed 00:07:27.664 Test: claim_v1_existing_v2 ...[2024-11-18 00:50:00.242247] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.242272] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.242288] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.242377] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:27.664 [2024-11-18 00:50:00.242425] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:27.664 passed 
00:07:27.664 Test: examine_claimed ...[2024-11-18 00:50:00.242455] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:27.664 passed 00:07:27.664 00:07:27.664 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.664 suites 1 1 n/a 0 0 00:07:27.664 tests 59 59 59 0 0 00:07:27.665 asserts 4599 4599 4599 0 n/a 00:07:27.665 00:07:27.665 Elapsed time = 2.412 seconds 00:07:27.665 [2024-11-18 00:50:00.242691] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:07:27.665 00:50:00 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:07:27.665 00:07:27.665 00:07:27.665 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.665 http://cunit.sourceforge.net/ 00:07:27.665 00:07:27.665 00:07:27.665 Suite: nvme 00:07:27.665 Test: test_create_ctrlr ...passed 00:07:27.665 Test: test_reset_ctrlr ...[2024-11-18 00:50:00.301801] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 passed 00:07:27.665 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:07:27.665 Test: test_failover_ctrlr ...passed 00:07:27.665 Test: test_race_between_failover_and_add_secondary_trid ...[2024-11-18 00:50:00.303970] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 [2024-11-18 00:50:00.304160] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 [2024-11-18 00:50:00.304326] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 passed 00:07:27.665 Test: test_pending_reset ...[2024-11-18 00:50:00.305693] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 [2024-11-18 00:50:00.305942] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 passed 00:07:27.665 Test: test_attach_ctrlr ...[2024-11-18 00:50:00.307029] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:07:27.665 passed 00:07:27.665 Test: test_aer_cb ...passed 00:07:27.665 Test: test_submit_nvme_cmd ...passed 00:07:27.665 Test: test_add_remove_trid ...passed 00:07:27.665 Test: test_abort ...[2024-11-18 00:50:00.309993] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:07:27.665 passed 00:07:27.665 Test: test_get_io_qpair ...passed 00:07:27.665 Test: test_bdev_unregister ...passed 00:07:27.665 Test: test_compare_ns ...passed 00:07:27.665 Test: test_init_ana_log_page ...passed 00:07:27.665 Test: test_get_memory_domains ...passed 00:07:27.665 Test: test_reconnect_qpair ...[2024-11-18 00:50:00.312295] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:07:27.665 passed 00:07:27.665 Test: test_create_bdev_ctrlr ...[2024-11-18 00:50:00.312696] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:07:27.665 passed 00:07:27.665 Test: test_add_multi_ns_to_bdev ...[2024-11-18 00:50:00.313713] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:07:27.665 passed 00:07:27.665 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:07:27.665 Test: test_admin_path ...passed 00:07:27.665 Test: test_reset_bdev_ctrlr ...passed 00:07:27.665 Test: test_find_io_path ...passed 00:07:27.665 Test: test_retry_io_if_ana_state_is_updating ...passed 00:07:27.665 Test: test_retry_io_for_io_path_error ...passed 00:07:27.665 Test: test_retry_io_count ...passed 00:07:27.665 Test: test_concurrent_read_ana_log_page ...passed 00:07:27.665 Test: test_retry_io_for_ana_error ...passed 00:07:27.665 Test: test_check_io_error_resiliency_params ...[2024-11-18 00:50:00.319538] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:07:27.665 [2024-11-18 00:50:00.319601] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:07:27.665 [2024-11-18 00:50:00.319628] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:07:27.665 [2024-11-18 00:50:00.319652] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:07:27.665 [2024-11-18 00:50:00.319687] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:07:27.665 [2024-11-18 00:50:00.319712] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:07:27.665 [2024-11-18 00:50:00.319736] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:07:27.665 passed 00:07:27.665 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-11-18 00:50:00.319779] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:07:27.665 [2024-11-18 00:50:00.319808] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:07:27.665 passed 00:07:27.665 Test: test_reconnect_ctrlr ...[2024-11-18 00:50:00.320417] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 [2024-11-18 00:50:00.320528] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
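The test_check_io_error_resiliency_params errors above list every constraint on the nvme bdev reconnect parameters: ctrlr_loss_timeout_sec may not be below -1; if it is 0, both reconnect_delay_sec and fast_io_fail_timeout_sec must also be 0; otherwise reconnect_delay_sec must be non-zero, no larger than ctrlr_loss_timeout_sec, and no larger than fast_io_fail_timeout_sec, which in turn may not exceed ctrlr_loss_timeout_sec. Restated as a stand-alone validator (an illustrative sketch assuming these logged rules, not the SPDK function itself):

/*
 * Illustrative sketch only: the parameter constraints printed by
 * bdev_nvme_check_io_error_resiliency_params above, as a stand-alone check.
 */
#include <stdbool.h>
#include <stdint.h>

static bool io_error_resiliency_params_valid(int32_t ctrlr_loss_timeout_sec,
                                             uint32_t reconnect_delay_sec,
                                             uint32_t fast_io_fail_timeout_sec)
{
    /* "ctrlr_loss_timeout_sec can't be less than -1" */
    if (ctrlr_loss_timeout_sec < -1)
        return false;

    if (ctrlr_loss_timeout_sec == 0) {
        /* "Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0
           if ctrlr_loss_timeout_sec is 0" */
        return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
    }

    /* From here on ctrlr_loss_timeout_sec is -1 (retry forever) or > 0. */

    /* "reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0" */
    if (reconnect_delay_sec == 0)
        return false;

    /* "reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec" */
    if (ctrlr_loss_timeout_sec > 0 &&
        reconnect_delay_sec > (uint32_t)ctrlr_loss_timeout_sec)
        return false;

    if (fast_io_fail_timeout_sec != 0) {
        /* "fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec" */
        if (ctrlr_loss_timeout_sec > 0 &&
            fast_io_fail_timeout_sec > (uint32_t)ctrlr_loss_timeout_sec)
            return false;

        /* "reconnect_delay_sec can't be more than fast_io_fail_timeout_sec" */
        if (reconnect_delay_sec > fast_io_fail_timeout_sec)
            return false;
    }

    return true;
}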
00:07:27.665 [2024-11-18 00:50:00.320756] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 [2024-11-18 00:50:00.320832] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 [2024-11-18 00:50:00.320930] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 passed 00:07:27.665 Test: test_retry_failover_ctrlr ...[2024-11-18 00:50:00.321208] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 passed 00:07:27.665 Test: test_fail_path ...[2024-11-18 00:50:00.321640] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 [2024-11-18 00:50:00.321775] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 [2024-11-18 00:50:00.321868] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 [2024-11-18 00:50:00.321969] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 [2024-11-18 00:50:00.322083] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 passed 00:07:27.665 Test: test_nvme_ns_cmp ...passed 00:07:27.665 Test: test_ana_transition ...passed 00:07:27.665 Test: test_set_preferred_path ...passed 00:07:27.665 Test: test_find_next_io_path ...passed 00:07:27.665 Test: test_find_io_path_min_qd ...passed 00:07:27.665 Test: test_disable_auto_failback ...[2024-11-18 00:50:00.323592] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 passed 00:07:27.665 Test: test_set_multipath_policy ...passed 00:07:27.665 Test: test_uuid_generation ...passed 00:07:27.665 Test: test_retry_io_to_same_path ...passed 00:07:27.665 Test: test_race_between_reset_and_disconnected ...passed 00:07:27.665 Test: test_ctrlr_op_rpc ...passed 00:07:27.665 Test: test_bdev_ctrlr_op_rpc ...passed 00:07:27.665 Test: test_disable_enable_ctrlr ...[2024-11-18 00:50:00.326255] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:27.665 [2024-11-18 00:50:00.326348] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:07:27.665 passed 00:07:27.665 Test: test_delete_ctrlr_done ...passed 00:07:27.665 Test: test_ns_remove_during_reset ...passed 00:07:27.665 00:07:27.665 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.665 suites 1 1 n/a 0 0 00:07:27.665 tests 48 48 48 0 0 00:07:27.665 asserts 3553 3553 3553 0 n/a 00:07:27.665 00:07:27.665 Elapsed time = 0.026 seconds 00:07:27.665 00:50:00 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:07:27.665 Test Options 00:07:27.665 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:07:27.665 00:07:27.665 00:07:27.665 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.665 http://cunit.sourceforge.net/ 00:07:27.665 00:07:27.665 00:07:27.665 Suite: raid 00:07:27.665 Test: test_create_raid ...passed 00:07:27.665 Test: test_create_raid_superblock ...passed 00:07:27.665 Test: test_delete_raid ...passed 00:07:27.665 Test: test_create_raid_invalid_args ...[2024-11-18 00:50:00.390678] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:07:27.665 [2024-11-18 00:50:00.391238] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:07:27.665 [2024-11-18 00:50:00.391805] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:07:27.665 [2024-11-18 00:50:00.392167] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:07:27.665 [2024-11-18 00:50:00.393146] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:07:27.665 passed 00:07:27.665 Test: test_delete_raid_invalid_args ...passed 00:07:27.665 Test: test_io_channel ...passed 00:07:27.665 Test: test_reset_io ...passed 00:07:27.665 Test: test_write_io ...passed 00:07:27.665 Test: test_read_io ...passed 00:07:27.665 Test: test_unmap_io ...passed 00:07:27.665 Test: test_io_failure ...[2024-11-18 00:50:01.624903] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:07:27.665 passed 00:07:27.665 Test: test_multi_raid_no_io ...passed 00:07:27.666 Test: test_multi_raid_with_io ...passed 00:07:27.666 Test: test_io_type_supported ...passed 00:07:27.666 Test: test_raid_json_dump_info ...passed 00:07:27.666 Test: test_context_size ...passed 00:07:27.666 Test: test_raid_level_conversions ...passed 00:07:27.666 Test: test_raid_process ...passed 00:07:27.666 Test: test_raid_io_split ...passed 00:07:27.666 00:07:27.666 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.666 suites 1 1 n/a 0 0 00:07:27.666 tests 19 19 19 0 0 00:07:27.666 asserts 177879 177879 177879 0 n/a 00:07:27.666 00:07:27.666 Elapsed time = 1.248 seconds 00:07:27.666 00:50:01 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:07:27.666 00:07:27.666 00:07:27.666 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.666 http://cunit.sourceforge.net/ 00:07:27.666 00:07:27.666 00:07:27.666 Suite: raid_sb 00:07:27.666 Test: test_raid_bdev_write_superblock ...passed 00:07:27.666 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:07:27.666 Test: 
test_raid_bdev_parse_superblock ...[2024-11-18 00:50:01.689084] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:07:27.666 passed 00:07:27.666 00:07:27.666 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.666 suites 1 1 n/a 0 0 00:07:27.666 tests 3 3 3 0 0 00:07:27.666 asserts 32 32 32 0 n/a 00:07:27.666 00:07:27.666 Elapsed time = 0.001 seconds 00:07:27.666 00:50:01 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:07:27.666 00:07:27.666 00:07:27.666 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.666 http://cunit.sourceforge.net/ 00:07:27.666 00:07:27.666 00:07:27.666 Suite: concat 00:07:27.666 Test: test_concat_start ...passed 00:07:27.666 Test: test_concat_rw ...passed 00:07:27.666 Test: test_concat_null_payload ...passed 00:07:27.666 00:07:27.666 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.666 suites 1 1 n/a 0 0 00:07:27.666 tests 3 3 3 0 0 00:07:27.666 asserts 8097 8097 8097 0 n/a 00:07:27.666 00:07:27.666 Elapsed time = 0.008 seconds 00:07:27.666 00:50:01 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:07:27.666 00:07:27.666 00:07:27.666 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.666 http://cunit.sourceforge.net/ 00:07:27.666 00:07:27.666 00:07:27.666 Suite: raid1 00:07:27.666 Test: test_raid1_start ...passed 00:07:27.666 Test: test_raid1_read_balancing ...passed 00:07:27.666 00:07:27.666 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.666 suites 1 1 n/a 0 0 00:07:27.666 tests 2 2 2 0 0 00:07:27.666 asserts 2856 2856 2856 0 n/a 00:07:27.666 00:07:27.666 Elapsed time = 0.004 seconds 00:07:27.666 00:50:01 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:07:27.666 00:07:27.666 00:07:27.666 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.666 http://cunit.sourceforge.net/ 00:07:27.666 00:07:27.666 00:07:27.666 Suite: zone 00:07:27.666 Test: test_zone_get_operation ...passed 00:07:27.666 Test: test_bdev_zone_get_info ...passed 00:07:27.666 Test: test_bdev_zone_management ...passed 00:07:27.666 Test: test_bdev_zone_append ...passed 00:07:27.666 Test: test_bdev_zone_append_with_md ...passed 00:07:27.666 Test: test_bdev_zone_appendv ...passed 00:07:27.666 Test: test_bdev_zone_appendv_with_md ...passed 00:07:27.666 Test: test_bdev_io_get_append_location ...passed 00:07:27.666 00:07:27.666 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.666 suites 1 1 n/a 0 0 00:07:27.666 tests 8 8 8 0 0 00:07:27.666 asserts 94 94 94 0 n/a 00:07:27.666 00:07:27.666 Elapsed time = 0.001 seconds 00:07:27.666 00:50:01 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:07:27.666 00:07:27.666 00:07:27.666 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.666 http://cunit.sourceforge.net/ 00:07:27.666 00:07:27.666 00:07:27.666 Suite: gpt_parse 00:07:27.666 Test: test_parse_mbr_and_primary ...[2024-11-18 00:50:01.874251] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:27.666 [2024-11-18 00:50:01.874674] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:27.666 [2024-11-18 00:50:01.874753] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:07:27.666 [2024-11-18 00:50:01.874868] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:07:27.666 [2024-11-18 00:50:01.874932] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:07:27.666 [2024-11-18 00:50:01.875050] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:07:27.666 passed 00:07:27.666 Test: test_parse_secondary ...[2024-11-18 00:50:01.875812] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:07:27.666 [2024-11-18 00:50:01.875882] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:07:27.666 [2024-11-18 00:50:01.875933] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:07:27.666 [2024-11-18 00:50:01.875985] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:07:27.666 passed 00:07:27.666 Test: test_check_mbr ...[2024-11-18 00:50:01.876737] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:27.666 passed 00:07:27.666 Test: test_read_header ...[2024-11-18 00:50:01.876807] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:27.666 [2024-11-18 00:50:01.876883] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:07:27.666 [2024-11-18 00:50:01.877018] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:07:27.666 [2024-11-18 00:50:01.877130] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:07:27.666 [2024-11-18 00:50:01.877196] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:07:27.666 [2024-11-18 00:50:01.877250] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:07:27.666 [2024-11-18 00:50:01.877304] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:07:27.666 passed 00:07:27.666 Test: test_read_partitions ...[2024-11-18 00:50:01.877381] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:07:27.666 [2024-11-18 00:50:01.877452] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:07:27.666 [2024-11-18 00:50:01.877524] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:07:27.666 [2024-11-18 00:50:01.877569] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:07:27.666 [2024-11-18 00:50:01.877961] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:07:27.666 passed 00:07:27.666 00:07:27.666 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.666 suites 1 1 n/a 0 0 00:07:27.666 tests 5 5 5 0 0 00:07:27.666 asserts 33 33 33 0 n/a 00:07:27.666 00:07:27.666 Elapsed time = 0.005 seconds 00:07:27.666 00:50:01 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:07:27.666 00:07:27.666 00:07:27.666 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.666 http://cunit.sourceforge.net/ 00:07:27.666 00:07:27.666 00:07:27.666 Suite: bdev_part 00:07:27.666 Test: part_test ...[2024-11-18 00:50:01.922396] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:07:27.666 passed 00:07:27.666 Test: part_free_test ...passed 00:07:27.666 Test: part_get_io_channel_test ...passed 00:07:27.666 Test: part_construct_ext ...passed 00:07:27.666 00:07:27.666 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.666 suites 1 1 n/a 0 0 00:07:27.666 tests 4 4 4 0 0 00:07:27.666 asserts 48 48 48 0 n/a 00:07:27.666 00:07:27.666 Elapsed time = 0.077 seconds 00:07:27.666 00:50:02 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:07:27.666 00:07:27.666 00:07:27.666 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.666 http://cunit.sourceforge.net/ 00:07:27.666 00:07:27.666 00:07:27.666 Suite: scsi_nvme_suite 00:07:27.666 Test: scsi_nvme_translate_test ...passed 00:07:27.666 00:07:27.666 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.666 suites 1 1 n/a 0 0 00:07:27.666 tests 1 1 1 0 0 00:07:27.666 asserts 104 104 104 0 n/a 00:07:27.666 00:07:27.666 Elapsed time = 0.000 seconds 00:07:27.927 00:50:02 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:07:27.927 00:07:27.927 00:07:27.927 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.927 http://cunit.sourceforge.net/ 00:07:27.927 00:07:27.927 00:07:27.927 Suite: lvol 00:07:27.927 Test: ut_lvs_init ...[2024-11-18 00:50:02.089807] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:07:27.927 [2024-11-18 00:50:02.090412] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:07:27.927 passed 00:07:27.927 Test: ut_lvol_init ...passed 00:07:27.927 Test: ut_lvol_snapshot ...passed 00:07:27.927 Test: ut_lvol_clone ...passed 00:07:27.927 Test: ut_lvs_destroy ...passed 00:07:27.927 Test: ut_lvs_unload ...passed 00:07:27.927 Test: ut_lvol_resize ...[2024-11-18 00:50:02.092594] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:07:27.927 passed 00:07:27.927 Test: ut_lvol_set_read_only ...passed 00:07:27.927 Test: ut_lvol_hotremove ...passed 00:07:27.927 Test: ut_vbdev_lvol_get_io_channel ...passed 00:07:27.927 Test: ut_vbdev_lvol_io_type_supported ...passed 00:07:27.927 Test: ut_lvol_read_write ...passed 00:07:27.927 Test: ut_vbdev_lvol_submit_request ...passed 00:07:27.927 Test: ut_lvol_examine_config ...passed 00:07:27.927 Test: ut_lvol_examine_disk ...[2024-11-18 00:50:02.093582] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:07:27.927 passed 00:07:27.927 Test: ut_lvol_rename ...[2024-11-18 00:50:02.094991] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:07:27.927 [2024-11-18 00:50:02.095137] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:07:27.927 passed 00:07:27.927 Test: ut_bdev_finish ...passed 00:07:27.927 Test: ut_lvs_rename ...passed 00:07:27.927 Test: ut_lvol_seek ...passed 00:07:27.927 Test: ut_esnap_dev_create ...[2024-11-18 00:50:02.096127] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:07:27.927 [2024-11-18 00:50:02.096228] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:07:27.927 [2024-11-18 00:50:02.096272] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:07:27.927 passed 00:07:27.927 Test: ut_lvol_esnap_clone_bad_args ...[2024-11-18 00:50:02.096336] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:07:27.927 [2024-11-18 00:50:02.096537] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:07:27.927 [2024-11-18 00:50:02.096590] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:07:27.927 passed 00:07:27.927 00:07:27.927 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.927 suites 1 1 n/a 0 0 00:07:27.927 tests 21 21 21 0 0 00:07:27.927 asserts 712 712 712 0 n/a 00:07:27.927 00:07:27.927 Elapsed time = 0.007 seconds 00:07:27.927 00:50:02 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:07:27.927 00:07:27.927 00:07:27.927 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.927 http://cunit.sourceforge.net/ 00:07:27.927 00:07:27.927 00:07:27.927 Suite: zone_block 00:07:27.927 Test: test_zone_block_create ...passed 00:07:27.928 Test: test_zone_block_create_invalid ...[2024-11-18 00:50:02.173792] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:07:27.928 [2024-11-18 00:50:02.174288] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-11-18 00:50:02.174551] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:07:27.928 [2024-11-18 00:50:02.174645] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-11-18 00:50:02.174866] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:07:27.928 [2024-11-18 00:50:02.174927] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-11-18 00:50:02.175063] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:07:27.928 [2024-11-18 00:50:02.175151] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:07:27.928 Test: test_get_zone_info ...[2024-11-18 00:50:02.175896] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.176001] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.176073] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 passed 00:07:27.928 Test: test_supported_io_types ...passed 00:07:27.928 Test: test_reset_zone ...[2024-11-18 00:50:02.177227] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.177315] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 passed 00:07:27.928 Test: test_open_zone ...[2024-11-18 00:50:02.177918] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.178767] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.178874] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 passed 00:07:27.928 Test: test_zone_write ...[2024-11-18 00:50:02.179501] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:07:27.928 [2024-11-18 00:50:02.179593] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.179666] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:07:27.928 [2024-11-18 00:50:02.179740] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.187786] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:07:27.928 [2024-11-18 00:50:02.187862] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:07:27.928 [2024-11-18 00:50:02.187974] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:07:27.928 [2024-11-18 00:50:02.188030] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.195960] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:07:27.928 passed 00:07:27.928 Test: test_zone_read ...[2024-11-18 00:50:02.196057] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.196714] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:07:27.928 [2024-11-18 00:50:02.196777] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.196884] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:07:27.928 [2024-11-18 00:50:02.196934] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.197578] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:07:27.928 [2024-11-18 00:50:02.197634] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 passed 00:07:27.928 Test: test_close_zone ...[2024-11-18 00:50:02.198163] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.198264] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.198595] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.198660] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 passed 00:07:27.928 Test: test_finish_zone ...[2024-11-18 00:50:02.199441] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.199502] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:07:27.928 passed 00:07:27.928 Test: test_append_zone ...[2024-11-18 00:50:02.200000] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:07:27.928 [2024-11-18 00:50:02.200065] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.200128] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:07:27.928 [2024-11-18 00:50:02.200192] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 [2024-11-18 00:50:02.215622] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:07:27.928 passed 00:07:27.928 00:07:27.928 [2024-11-18 00:50:02.215717] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:27.928 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.928 suites 1 1 n/a 0 0 00:07:27.928 tests 11 11 11 0 0 00:07:27.928 asserts 3437 3437 3437 0 n/a 00:07:27.928 00:07:27.928 Elapsed time = 0.044 seconds 00:07:27.928 00:50:02 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:07:27.928 00:07:27.928 00:07:27.928 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.928 http://cunit.sourceforge.net/ 00:07:27.928 00:07:27.928 00:07:27.928 Suite: bdev 00:07:28.186 Test: basic ...[2024-11-18 00:50:02.366155] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55c851007401): Operation not permitted (rc=-1) 00:07:28.187 [2024-11-18 00:50:02.366659] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55c8510073c0): Operation not permitted (rc=-1) 00:07:28.187 [2024-11-18 00:50:02.366727] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55c851007401): Operation not permitted (rc=-1) 00:07:28.187 passed 00:07:28.187 Test: unregister_and_close ...passed 00:07:28.187 Test: unregister_and_close_different_threads ...passed 00:07:28.445 Test: basic_qos ...passed 00:07:28.445 Test: put_channel_during_reset ...passed 00:07:28.445 Test: aborted_reset ...passed 00:07:28.445 Test: aborted_reset_no_outstanding_io ...passed 00:07:28.704 Test: io_during_reset ...passed 00:07:28.704 Test: reset_completions ...passed 00:07:28.704 Test: io_during_qos_queue ...passed 00:07:28.704 Test: io_during_qos_reset ...passed 00:07:28.963 Test: enomem ...passed 00:07:28.963 Test: enomem_multi_bdev ...passed 00:07:28.963 Test: enomem_multi_bdev_unregister ...passed 00:07:28.963 Test: enomem_multi_io_target ...passed 00:07:29.223 Test: qos_dynamic_enable ...passed 00:07:29.223 Test: bdev_histograms_mt ...passed 00:07:29.223 Test: bdev_set_io_timeout_mt ...[2024-11-18 00:50:03.521317] thread.c: 467:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:07:29.223 passed 00:07:29.223 Test: lock_lba_range_then_submit_io ...[2024-11-18 00:50:03.548271] thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x55c851007380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:07:29.223 
passed 00:07:29.482 Test: unregister_during_reset ...passed 00:07:29.482 Test: event_notify_and_close ...passed 00:07:29.482 Test: unregister_and_qos_poller ...passed 00:07:29.482 Suite: bdev_wrong_thread 00:07:29.482 Test: spdk_bdev_register_wt ...[2024-11-18 00:50:03.763917] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:07:29.482 passed 00:07:29.482 Test: spdk_bdev_examine_wt ...[2024-11-18 00:50:03.764267] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:07:29.482 passed 00:07:29.482 00:07:29.482 Run Summary: Type Total Ran Passed Failed Inactive 00:07:29.482 suites 2 2 n/a 0 0 00:07:29.482 tests 24 24 24 0 0 00:07:29.482 asserts 621 621 621 0 n/a 00:07:29.482 00:07:29.482 Elapsed time = 1.440 seconds 00:07:29.482 00:07:29.482 real 0m6.002s 00:07:29.482 user 0m2.487s 00:07:29.482 sys 0m3.514s 00:07:29.482 00:50:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.482 00:50:03 -- common/autotest_common.sh@10 -- # set +x 00:07:29.482 ************************************ 00:07:29.482 END TEST unittest_bdev 00:07:29.482 ************************************ 00:07:29.482 00:50:03 -- unit/unittest.sh@189 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:29.482 00:50:03 -- unit/unittest.sh@194 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:29.482 00:50:03 -- unit/unittest.sh@199 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:29.482 00:50:03 -- unit/unittest.sh@203 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:29.482 00:50:03 -- unit/unittest.sh@204 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:07:29.482 00:50:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:29.482 00:50:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.482 00:50:03 -- common/autotest_common.sh@10 -- # set +x 00:07:29.482 ************************************ 00:07:29.482 START TEST unittest_bdev_raid5f 00:07:29.482 ************************************ 00:07:29.482 00:50:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:07:29.742 00:07:29.742 00:07:29.742 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.742 http://cunit.sourceforge.net/ 00:07:29.742 00:07:29.742 00:07:29.742 Suite: raid5f 00:07:29.742 Test: test_raid5f_start ...passed 00:07:30.311 Test: test_raid5f_submit_read_request ...passed 00:07:30.570 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:07:33.866 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:07:51.971 Test: test_raid5f_chunk_write_error ...passed 00:08:00.094 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:08:03.384 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:08:29.946 Test: test_raid5f_submit_read_request_degraded ...passed 00:08:29.946 00:08:29.946 Run Summary: Type Total Ran Passed Failed Inactive 00:08:29.946 suites 1 1 n/a 0 0 00:08:29.946 tests 8 8 8 0 0 00:08:29.946 asserts 351864 351864 351864 0 n/a 00:08:29.946 00:08:29.946 Elapsed time = 57.715 seconds 00:08:29.946 00:08:29.946 real 0m57.842s 00:08:29.946 user 
0m53.875s 00:08:29.946 sys 0m3.960s 00:08:29.946 00:51:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:29.946 00:51:01 -- common/autotest_common.sh@10 -- # set +x 00:08:29.946 ************************************ 00:08:29.946 END TEST unittest_bdev_raid5f 00:08:29.946 ************************************ 00:08:29.946 00:51:01 -- unit/unittest.sh@207 -- # run_test unittest_blob_blobfs unittest_blob 00:08:29.946 00:51:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:29.946 00:51:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.946 00:51:01 -- common/autotest_common.sh@10 -- # set +x 00:08:29.946 ************************************ 00:08:29.946 START TEST unittest_blob_blobfs 00:08:29.946 ************************************ 00:08:29.946 00:51:01 -- common/autotest_common.sh@1114 -- # unittest_blob 00:08:29.946 00:51:01 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:08:29.946 00:51:01 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:08:29.946 00:08:29.946 00:08:29.946 CUnit - A unit testing framework for C - Version 2.1-3 00:08:29.946 http://cunit.sourceforge.net/ 00:08:29.946 00:08:29.946 00:08:29.946 Suite: blob_nocopy_noextent 00:08:29.946 Test: blob_init ...[2024-11-18 00:51:01.830490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:29.946 passed 00:08:29.946 Test: blob_thin_provision ...passed 00:08:29.946 Test: blob_read_only ...passed 00:08:29.946 Test: bs_load ...[2024-11-18 00:51:01.988554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:29.946 passed 00:08:29.946 Test: bs_load_custom_cluster_size ...passed 00:08:29.946 Test: bs_load_after_failed_grow ...passed 00:08:29.946 Test: bs_cluster_sz ...[2024-11-18 00:51:02.040899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:29.946 [2024-11-18 00:51:02.041504] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:08:29.946 [2024-11-18 00:51:02.041746] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:29.946 passed 00:08:29.946 Test: bs_resize_md ...passed 00:08:29.946 Test: bs_destroy ...passed 00:08:29.946 Test: bs_type ...passed 00:08:29.946 Test: bs_super_block ...passed 00:08:29.946 Test: bs_test_recover_cluster_count ...passed 00:08:29.946 Test: bs_grow_live ...passed 00:08:29.946 Test: bs_grow_live_no_space ...passed 00:08:29.946 Test: bs_test_grow ...passed 00:08:29.946 Test: blob_serialize_test ...passed 00:08:29.946 Test: super_block_crc ...passed 00:08:29.946 Test: blob_thin_prov_write_count_io ...passed 00:08:29.946 Test: bs_load_iter_test ...passed 00:08:29.946 Test: blob_relations ...[2024-11-18 00:51:02.315691] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:29.946 [2024-11-18 00:51:02.315818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:29.946 [2024-11-18 00:51:02.316796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:29.946 [2024-11-18 00:51:02.316875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:29.946 passed 00:08:29.946 Test: blob_relations2 ...[2024-11-18 00:51:02.341539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:29.947 [2024-11-18 00:51:02.341636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:29.947 [2024-11-18 00:51:02.341683] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:29.947 [2024-11-18 00:51:02.341704] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:29.947 [2024-11-18 00:51:02.343158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:29.947 [2024-11-18 00:51:02.343233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:29.947 [2024-11-18 00:51:02.343659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:29.947 [2024-11-18 00:51:02.343720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:29.947 passed 00:08:29.947 Test: blob_relations3 ...passed 00:08:29.947 Test: blobstore_clean_power_failure ...passed 00:08:29.947 Test: blob_delete_snapshot_power_failure ...[2024-11-18 00:51:02.617726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:29.947 [2024-11-18 00:51:02.638145] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:29.947 [2024-11-18 00:51:02.638251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:29.947 [2024-11-18 00:51:02.638310] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:29.947 [2024-11-18 00:51:02.658364] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:29.947 [2024-11-18 00:51:02.658472] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:29.947 [2024-11-18 00:51:02.658529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:29.947 [2024-11-18 00:51:02.658568] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:29.947 [2024-11-18 00:51:02.678811] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:29.947 [2024-11-18 00:51:02.678964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:29.947 [2024-11-18 00:51:02.699173] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:29.947 [2024-11-18 00:51:02.699313] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:29.947 [2024-11-18 00:51:02.719758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:29.947 [2024-11-18 00:51:02.719886] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:29.947 passed 00:08:29.947 Test: blob_create_snapshot_power_failure ...[2024-11-18 00:51:02.780557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:29.947 [2024-11-18 00:51:02.821703] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:29.947 [2024-11-18 00:51:02.842838] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:29.947 passed 00:08:29.947 Test: blob_io_unit ...passed 00:08:29.947 Test: blob_io_unit_compatibility ...passed 00:08:29.947 Test: blob_ext_md_pages ...passed 00:08:29.947 Test: blob_esnap_io_4096_4096 ...passed 00:08:29.947 Test: blob_esnap_io_512_512 ...passed 00:08:29.947 Test: blob_esnap_io_4096_512 ...passed 00:08:29.947 Test: blob_esnap_io_512_4096 ...passed 00:08:29.947 Suite: blob_bs_nocopy_noextent 00:08:29.947 Test: blob_open ...passed 00:08:29.947 Test: blob_create ...[2024-11-18 00:51:03.251977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:29.947 passed 00:08:29.947 Test: blob_create_loop ...passed 00:08:29.947 Test: blob_create_fail ...[2024-11-18 00:51:03.395328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:29.947 passed 00:08:29.947 Test: blob_create_internal ...passed 00:08:29.947 Test: blob_create_zero_extent ...passed 00:08:29.947 Test: blob_snapshot ...passed 00:08:29.947 Test: blob_clone ...passed 00:08:29.947 Test: blob_inflate ...[2024-11-18 00:51:03.711211] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:29.947 passed 00:08:29.947 Test: blob_delete ...passed 00:08:29.947 Test: blob_resize_test ...[2024-11-18 00:51:03.830532] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:29.947 passed 00:08:29.947 Test: channel_ops ...passed 00:08:29.947 Test: blob_super ...passed 00:08:29.947 Test: blob_rw_verify_iov ...passed 00:08:29.947 Test: blob_unmap ...passed 00:08:29.947 Test: blob_iter ...passed 00:08:29.947 Test: blob_parse_md ...passed 00:08:29.947 Test: bs_load_pending_removal ...passed 00:08:29.947 Test: bs_unload ...[2024-11-18 00:51:04.306660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:29.947 passed 00:08:30.206 Test: bs_usable_clusters ...passed 00:08:30.206 Test: blob_crc ...[2024-11-18 00:51:04.422939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:30.206 [2024-11-18 00:51:04.423111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:30.206 passed 00:08:30.206 Test: blob_flags ...passed 00:08:30.206 Test: bs_version ...passed 00:08:30.465 Test: blob_set_xattrs_test ...[2024-11-18 00:51:04.610163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:30.466 [2024-11-18 00:51:04.610286] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:30.466 passed 00:08:30.466 Test: blob_thin_prov_alloc ...passed 00:08:30.466 Test: blob_insert_cluster_msg_test ...passed 00:08:30.725 Test: blob_thin_prov_rw ...passed 00:08:30.725 Test: blob_thin_prov_rle ...passed 00:08:30.725 Test: blob_thin_prov_rw_iov ...passed 00:08:30.725 Test: blob_snapshot_rw ...passed 00:08:30.983 Test: blob_snapshot_rw_iov ...passed 00:08:31.242 Test: blob_inflate_rw ...passed 00:08:31.242 Test: blob_snapshot_freeze_io ...passed 00:08:31.242 Test: blob_operation_split_rw ...passed 00:08:31.501 Test: blob_operation_split_rw_iov ...passed 00:08:31.501 Test: blob_simultaneous_operations ...[2024-11-18 00:51:05.854113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:31.501 [2024-11-18 00:51:05.854252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:31.501 [2024-11-18 00:51:05.855704] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:31.501 [2024-11-18 00:51:05.855766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:31.501 [2024-11-18 00:51:05.870252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:31.501 [2024-11-18 00:51:05.870351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:31.502 [2024-11-18 00:51:05.870495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:08:31.502 [2024-11-18 00:51:05.870529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:31.761 passed 00:08:31.761 Test: blob_persist_test ...passed 00:08:31.761 Test: blob_decouple_snapshot ...passed 00:08:31.761 Test: blob_seek_io_unit ...passed 00:08:32.020 Test: blob_nested_freezes ...passed 00:08:32.020 Suite: blob_blob_nocopy_noextent 00:08:32.020 Test: blob_write ...passed 00:08:32.020 Test: blob_read ...passed 00:08:32.020 Test: blob_rw_verify ...passed 00:08:32.279 Test: blob_rw_verify_iov_nomem ...passed 00:08:32.279 Test: blob_rw_iov_read_only ...passed 00:08:32.279 Test: blob_xattr ...passed 00:08:32.279 Test: blob_dirty_shutdown ...passed 00:08:32.279 Test: blob_is_degraded ...passed 00:08:32.279 Suite: blob_esnap_bs_nocopy_noextent 00:08:32.538 Test: blob_esnap_create ...passed 00:08:32.538 Test: blob_esnap_thread_add_remove ...passed 00:08:32.538 Test: blob_esnap_clone_snapshot ...passed 00:08:32.538 Test: blob_esnap_clone_inflate ...passed 00:08:32.797 Test: blob_esnap_clone_decouple ...passed 00:08:32.797 Test: blob_esnap_clone_reload ...passed 00:08:32.797 Test: blob_esnap_hotplug ...passed 00:08:32.797 Suite: blob_nocopy_extent 00:08:32.797 Test: blob_init ...[2024-11-18 00:51:07.076962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:32.797 passed 00:08:32.797 Test: blob_thin_provision ...passed 00:08:32.797 Test: blob_read_only ...passed 00:08:32.797 Test: bs_load ...[2024-11-18 00:51:07.158230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:32.797 passed 00:08:32.797 Test: bs_load_custom_cluster_size ...passed 00:08:33.055 Test: bs_load_after_failed_grow ...passed 00:08:33.055 Test: bs_cluster_sz ...[2024-11-18 00:51:07.201096] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:33.055 [2024-11-18 00:51:07.201409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:08:33.055 [2024-11-18 00:51:07.201460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:33.055 passed 00:08:33.055 Test: bs_resize_md ...passed 00:08:33.055 Test: bs_destroy ...passed 00:08:33.055 Test: bs_type ...passed 00:08:33.056 Test: bs_super_block ...passed 00:08:33.056 Test: bs_test_recover_cluster_count ...passed 00:08:33.056 Test: bs_grow_live ...passed 00:08:33.056 Test: bs_grow_live_no_space ...passed 00:08:33.056 Test: bs_test_grow ...passed 00:08:33.056 Test: blob_serialize_test ...passed 00:08:33.056 Test: super_block_crc ...passed 00:08:33.056 Test: blob_thin_prov_write_count_io ...passed 00:08:33.056 Test: bs_load_iter_test ...passed 00:08:33.056 Test: blob_relations ...[2024-11-18 00:51:07.451644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:33.056 [2024-11-18 00:51:07.451778] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:33.056 [2024-11-18 00:51:07.452696] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:33.056 [2024-11-18 00:51:07.452771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:33.313 passed 00:08:33.313 Test: blob_relations2 ...[2024-11-18 00:51:07.477402] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:33.313 [2024-11-18 00:51:07.477527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:33.313 [2024-11-18 00:51:07.477557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:33.313 [2024-11-18 00:51:07.477591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:33.313 [2024-11-18 00:51:07.479013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:33.313 [2024-11-18 00:51:07.479071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:33.313 [2024-11-18 00:51:07.479454] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:33.313 [2024-11-18 00:51:07.479502] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:33.313 passed 00:08:33.313 Test: blob_relations3 ...passed 00:08:33.572 Test: blobstore_clean_power_failure ...passed 00:08:33.572 Test: blob_delete_snapshot_power_failure ...[2024-11-18 00:51:07.754921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:33.572 [2024-11-18 00:51:07.775306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:33.572 [2024-11-18 00:51:07.795745] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:33.572 [2024-11-18 00:51:07.795856] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:33.572 [2024-11-18 00:51:07.795890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:33.572 [2024-11-18 00:51:07.816299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:33.572 [2024-11-18 00:51:07.816408] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:33.572 [2024-11-18 00:51:07.816448] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:33.572 [2024-11-18 00:51:07.816480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:33.572 [2024-11-18 00:51:07.837223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:33.572 [2024-11-18 00:51:07.837331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:33.572 [2024-11-18 00:51:07.837363] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:33.572 [2024-11-18 00:51:07.837416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:33.572 [2024-11-18 00:51:07.857932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:33.572 [2024-11-18 00:51:07.858077] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:33.572 [2024-11-18 00:51:07.878459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:33.572 [2024-11-18 00:51:07.878597] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:33.572 [2024-11-18 00:51:07.899074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:33.572 [2024-11-18 00:51:07.899202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:33.572 passed 00:08:33.572 Test: blob_create_snapshot_power_failure ...[2024-11-18 00:51:07.959736] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:33.832 [2024-11-18 00:51:07.980349] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:33.832 [2024-11-18 00:51:08.021153] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:33.832 [2024-11-18 00:51:08.041863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:33.832 passed 00:08:33.832 Test: blob_io_unit ...passed 00:08:33.832 Test: blob_io_unit_compatibility ...passed 00:08:33.832 Test: blob_ext_md_pages ...passed 00:08:33.832 Test: blob_esnap_io_4096_4096 ...passed 00:08:34.091 Test: blob_esnap_io_512_512 ...passed 00:08:34.091 Test: blob_esnap_io_4096_512 ...passed 00:08:34.091 Test: 
blob_esnap_io_512_4096 ...passed 00:08:34.091 Suite: blob_bs_nocopy_extent 00:08:34.091 Test: blob_open ...passed 00:08:34.091 Test: blob_create ...[2024-11-18 00:51:08.454048] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:34.091 passed 00:08:34.350 Test: blob_create_loop ...passed 00:08:34.350 Test: blob_create_fail ...[2024-11-18 00:51:08.608183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:34.350 passed 00:08:34.350 Test: blob_create_internal ...passed 00:08:34.350 Test: blob_create_zero_extent ...passed 00:08:34.608 Test: blob_snapshot ...passed 00:08:34.608 Test: blob_clone ...passed 00:08:34.608 Test: blob_inflate ...[2024-11-18 00:51:08.928643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:34.608 passed 00:08:34.608 Test: blob_delete ...passed 00:08:34.866 Test: blob_resize_test ...[2024-11-18 00:51:09.044290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:34.866 passed 00:08:34.866 Test: channel_ops ...passed 00:08:34.866 Test: blob_super ...passed 00:08:34.866 Test: blob_rw_verify_iov ...passed 00:08:34.866 Test: blob_unmap ...passed 00:08:34.866 Test: blob_iter ...passed 00:08:35.134 Test: blob_parse_md ...passed 00:08:35.134 Test: bs_load_pending_removal ...passed 00:08:35.134 Test: bs_unload ...[2024-11-18 00:51:09.343439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:35.134 passed 00:08:35.134 Test: bs_usable_clusters ...passed 00:08:35.134 Test: blob_crc ...[2024-11-18 00:51:09.410557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:35.134 [2024-11-18 00:51:09.410673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:35.134 passed 00:08:35.134 Test: blob_flags ...passed 00:08:35.134 Test: bs_version ...passed 00:08:35.134 Test: blob_set_xattrs_test ...[2024-11-18 00:51:09.514204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:35.134 [2024-11-18 00:51:09.514543] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:35.134 passed 00:08:35.407 Test: blob_thin_prov_alloc ...passed 00:08:35.407 Test: blob_insert_cluster_msg_test ...passed 00:08:35.407 Test: blob_thin_prov_rw ...passed 00:08:35.407 Test: blob_thin_prov_rle ...passed 00:08:35.407 Test: blob_thin_prov_rw_iov ...passed 00:08:35.666 Test: blob_snapshot_rw ...passed 00:08:35.666 Test: blob_snapshot_rw_iov ...passed 00:08:35.928 Test: blob_inflate_rw ...passed 00:08:35.928 Test: blob_snapshot_freeze_io ...passed 00:08:35.928 Test: blob_operation_split_rw ...passed 00:08:36.193 Test: blob_operation_split_rw_iov ...passed 00:08:36.452 Test: blob_simultaneous_operations ...[2024-11-18 00:51:10.606208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:36.452 [2024-11-18 
00:51:10.606554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:36.452 [2024-11-18 00:51:10.608421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:36.452 [2024-11-18 00:51:10.608639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:36.452 [2024-11-18 00:51:10.627468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:36.452 [2024-11-18 00:51:10.627856] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:36.452 [2024-11-18 00:51:10.628084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:36.452 [2024-11-18 00:51:10.628258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:36.452 passed 00:08:36.452 Test: blob_persist_test ...passed 00:08:36.452 Test: blob_decouple_snapshot ...passed 00:08:36.710 Test: blob_seek_io_unit ...passed 00:08:36.710 Test: blob_nested_freezes ...passed 00:08:36.710 Suite: blob_blob_nocopy_extent 00:08:36.710 Test: blob_write ...passed 00:08:36.710 Test: blob_read ...passed 00:08:36.968 Test: blob_rw_verify ...passed 00:08:36.969 Test: blob_rw_verify_iov_nomem ...passed 00:08:36.969 Test: blob_rw_iov_read_only ...passed 00:08:36.969 Test: blob_xattr ...passed 00:08:37.227 Test: blob_dirty_shutdown ...passed 00:08:37.227 Test: blob_is_degraded ...passed 00:08:37.227 Suite: blob_esnap_bs_nocopy_extent 00:08:37.227 Test: blob_esnap_create ...passed 00:08:37.227 Test: blob_esnap_thread_add_remove ...passed 00:08:37.485 Test: blob_esnap_clone_snapshot ...passed 00:08:37.485 Test: blob_esnap_clone_inflate ...passed 00:08:37.485 Test: blob_esnap_clone_decouple ...passed 00:08:37.485 Test: blob_esnap_clone_reload ...passed 00:08:37.743 Test: blob_esnap_hotplug ...passed 00:08:37.743 Suite: blob_copy_noextent 00:08:37.743 Test: blob_init ...[2024-11-18 00:51:11.899538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:37.743 passed 00:08:37.743 Test: blob_thin_provision ...passed 00:08:37.743 Test: blob_read_only ...passed 00:08:37.743 Test: bs_load ...[2024-11-18 00:51:11.979871] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:37.743 passed 00:08:37.743 Test: bs_load_custom_cluster_size ...passed 00:08:37.743 Test: bs_load_after_failed_grow ...passed 00:08:37.743 Test: bs_cluster_sz ...[2024-11-18 00:51:12.022038] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:37.743 [2024-11-18 00:51:12.022307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
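The bs_cluster_sz failures in this suite are spdk_bs_init rejecting blobstore options that are either left at zero or describe a cluster size the device cannot support (smaller than the page size, or so small that metadata would need more clusters than the device has). The sketch below shows, under stated assumptions, how a caller would normally fill those options in before calling spdk_bs_init; the callback name and the 1 MiB cluster size are illustrative, not values taken from this run, and spdk_bs_opts_init took only the opts pointer in older SPDK releases.

#include "spdk/blob.h"

/* Hypothetical completion callback: receives the new blobstore handle. */
static void
init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
	if (bserrno != 0) {
		/* e.g. rejected options, as in the errors in this suite */
		return;
	}
	/* bs is now usable for blob create/open calls. */
}

static void
start_blobstore(struct spdk_bs_dev *bs_dev)
{
	struct spdk_bs_opts opts;

	/* Fill in defaults first so no field is accidentally left at 0. */
	spdk_bs_opts_init(&opts, sizeof(opts));
	opts.cluster_sz = 1024 * 1024;	/* assumed 1 MiB; must be at least the page size */

	spdk_bs_init(bs_dev, &opts, init_done, NULL);
}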
00:08:37.743 [2024-11-18 00:51:12.022563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:37.743 passed 00:08:37.743 Test: bs_resize_md ...passed 00:08:37.743 Test: bs_destroy ...passed 00:08:37.743 Test: bs_type ...passed 00:08:37.743 Test: bs_super_block ...passed 00:08:37.743 Test: bs_test_recover_cluster_count ...passed 00:08:38.002 Test: bs_grow_live ...passed 00:08:38.002 Test: bs_grow_live_no_space ...passed 00:08:38.002 Test: bs_test_grow ...passed 00:08:38.002 Test: blob_serialize_test ...passed 00:08:38.002 Test: super_block_crc ...passed 00:08:38.002 Test: blob_thin_prov_write_count_io ...passed 00:08:38.002 Test: bs_load_iter_test ...passed 00:08:38.002 Test: blob_relations ...[2024-11-18 00:51:12.278836] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:38.002 [2024-11-18 00:51:12.279178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:38.002 [2024-11-18 00:51:12.279809] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:38.002 [2024-11-18 00:51:12.279938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:38.002 passed 00:08:38.002 Test: blob_relations2 ...[2024-11-18 00:51:12.303783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:38.002 [2024-11-18 00:51:12.304128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:38.002 [2024-11-18 00:51:12.304194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:38.002 [2024-11-18 00:51:12.304298] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:38.002 [2024-11-18 00:51:12.305208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:38.002 [2024-11-18 00:51:12.305364] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:38.002 [2024-11-18 00:51:12.305676] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:38.002 [2024-11-18 00:51:12.305794] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:38.002 passed 00:08:38.002 Test: blob_relations3 ...passed 00:08:38.261 Test: blobstore_clean_power_failure ...passed 00:08:38.261 Test: blob_delete_snapshot_power_failure ...[2024-11-18 00:51:12.594255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:38.261 [2024-11-18 00:51:12.614768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:38.261 [2024-11-18 00:51:12.615150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:38.261 [2024-11-18 00:51:12.615221] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:38.261 [2024-11-18 00:51:12.635642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:38.261 [2024-11-18 00:51:12.635963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:38.261 [2024-11-18 00:51:12.636038] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:38.261 [2024-11-18 00:51:12.636147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:38.261 [2024-11-18 00:51:12.656607] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:38.261 [2024-11-18 00:51:12.656983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:38.520 [2024-11-18 00:51:12.677428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:38.520 [2024-11-18 00:51:12.677811] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:38.520 [2024-11-18 00:51:12.698272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:38.520 [2024-11-18 00:51:12.698656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:38.520 passed 00:08:38.520 Test: blob_create_snapshot_power_failure ...[2024-11-18 00:51:12.759786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:38.520 [2024-11-18 00:51:12.800523] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:38.520 [2024-11-18 00:51:12.821188] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:38.520 passed 00:08:38.520 Test: blob_io_unit ...passed 00:08:38.778 Test: blob_io_unit_compatibility ...passed 00:08:38.778 Test: blob_ext_md_pages ...passed 00:08:38.778 Test: blob_esnap_io_4096_4096 ...passed 00:08:38.778 Test: blob_esnap_io_512_512 ...passed 00:08:38.778 Test: blob_esnap_io_4096_512 ...passed 00:08:38.778 Test: blob_esnap_io_512_4096 ...passed 00:08:38.778 Suite: blob_bs_copy_noextent 00:08:39.037 Test: blob_open ...passed 00:08:39.037 Test: blob_create ...[2024-11-18 00:51:13.222082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:39.037 passed 00:08:39.037 Test: blob_create_loop ...passed 00:08:39.037 Test: blob_create_fail ...[2024-11-18 00:51:13.361943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:39.037 passed 00:08:39.296 Test: blob_create_internal ...passed 00:08:39.296 Test: blob_create_zero_extent ...passed 00:08:39.296 Test: blob_snapshot ...passed 00:08:39.296 Test: blob_clone ...passed 00:08:39.296 Test: blob_inflate ...[2024-11-18 00:51:13.667599] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:39.296 passed 00:08:39.554 Test: blob_delete ...passed 00:08:39.554 Test: blob_resize_test ...[2024-11-18 00:51:13.784024] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:39.554 passed 00:08:39.554 Test: channel_ops ...passed 00:08:39.554 Test: blob_super ...passed 00:08:39.813 Test: blob_rw_verify_iov ...passed 00:08:39.814 Test: blob_unmap ...passed 00:08:39.814 Test: blob_iter ...passed 00:08:39.814 Test: blob_parse_md ...passed 00:08:40.072 Test: bs_load_pending_removal ...passed 00:08:40.072 Test: bs_unload ...[2024-11-18 00:51:14.266496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:40.072 passed 00:08:40.072 Test: bs_usable_clusters ...passed 00:08:40.072 Test: blob_crc ...[2024-11-18 00:51:14.387848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:40.072 [2024-11-18 00:51:14.388249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:40.072 passed 00:08:40.072 Test: blob_flags ...passed 00:08:40.331 Test: bs_version ...passed 00:08:40.332 Test: blob_set_xattrs_test ...[2024-11-18 00:51:14.566281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:40.332 [2024-11-18 00:51:14.566592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:40.332 passed 00:08:40.591 Test: blob_thin_prov_alloc ...passed 00:08:40.591 Test: blob_insert_cluster_msg_test ...passed 00:08:40.591 Test: blob_thin_prov_rw ...passed 00:08:40.591 Test: blob_thin_prov_rle ...passed 00:08:40.591 Test: blob_thin_prov_rw_iov ...passed 00:08:40.850 Test: blob_snapshot_rw ...passed 00:08:40.850 Test: blob_snapshot_rw_iov ...passed 00:08:41.108 Test: blob_inflate_rw ...passed 00:08:41.108 Test: blob_snapshot_freeze_io ...passed 00:08:41.367 Test: blob_operation_split_rw ...passed 00:08:41.627 Test: blob_operation_split_rw_iov ...passed 00:08:41.627 Test: blob_simultaneous_operations ...[2024-11-18 00:51:15.822681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:41.627 [2024-11-18 00:51:15.823418] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:41.627 [2024-11-18 00:51:15.824062] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:41.627 [2024-11-18 00:51:15.824214] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:41.627 [2024-11-18 00:51:15.827639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:41.627 [2024-11-18 00:51:15.827822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:41.627 [2024-11-18 00:51:15.827960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:08:41.627 [2024-11-18 00:51:15.828076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:41.627 passed 00:08:41.627 Test: blob_persist_test ...passed 00:08:41.627 Test: blob_decouple_snapshot ...passed 00:08:41.886 Test: blob_seek_io_unit ...passed 00:08:41.886 Test: blob_nested_freezes ...passed 00:08:41.886 Suite: blob_blob_copy_noextent 00:08:41.886 Test: blob_write ...passed 00:08:41.886 Test: blob_read ...passed 00:08:42.146 Test: blob_rw_verify ...passed 00:08:42.146 Test: blob_rw_verify_iov_nomem ...passed 00:08:42.146 Test: blob_rw_iov_read_only ...passed 00:08:42.146 Test: blob_xattr ...passed 00:08:42.146 Test: blob_dirty_shutdown ...passed 00:08:42.404 Test: blob_is_degraded ...passed 00:08:42.404 Suite: blob_esnap_bs_copy_noextent 00:08:42.404 Test: blob_esnap_create ...passed 00:08:42.404 Test: blob_esnap_thread_add_remove ...passed 00:08:42.404 Test: blob_esnap_clone_snapshot ...passed 00:08:42.664 Test: blob_esnap_clone_inflate ...passed 00:08:42.664 Test: blob_esnap_clone_decouple ...passed 00:08:42.664 Test: blob_esnap_clone_reload ...passed 00:08:42.664 Test: blob_esnap_hotplug ...passed 00:08:42.664 Suite: blob_copy_extent 00:08:42.664 Test: blob_init ...[2024-11-18 00:51:17.015757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:42.664 passed 00:08:42.664 Test: blob_thin_provision ...passed 00:08:42.971 Test: blob_read_only ...passed 00:08:42.971 Test: bs_load ...[2024-11-18 00:51:17.096791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:42.971 passed 00:08:42.971 Test: bs_load_custom_cluster_size ...passed 00:08:42.971 Test: bs_load_after_failed_grow ...passed 00:08:42.971 Test: bs_cluster_sz ...[2024-11-18 00:51:17.139312] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:42.971 [2024-11-18 00:51:17.139560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
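The blob_init failure recorded for this suite ("unsupported dev block length of 500") is the blobstore refusing a backing device whose block length it cannot use. A blobstore always sits on a caller-supplied struct spdk_bs_dev; the fragment below is a rough sketch of the geometry fields that check looks at, not the test harness's actual device implementation, and the capacity value is an assumption for illustration. The I/O callbacks (read, write, destroy and friends) that a real device must also provide are omitted.

#include "spdk/blob.h"

static void
describe_backing_dev(struct spdk_bs_dev *dev)
{
	/* 500-byte blocks are rejected; a conventional block size such as 4096 is accepted. */
	dev->blocklen = 4096;
	/* Assumed capacity in blocks, purely illustrative. */
	dev->blockcnt = 1024 * 1024;
}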
00:08:42.971 [2024-11-18 00:51:17.139816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:42.971 passed 00:08:42.971 Test: bs_resize_md ...passed 00:08:42.971 Test: bs_destroy ...passed 00:08:42.971 Test: bs_type ...passed 00:08:42.971 Test: bs_super_block ...passed 00:08:42.971 Test: bs_test_recover_cluster_count ...passed 00:08:42.971 Test: bs_grow_live ...passed 00:08:42.971 Test: bs_grow_live_no_space ...passed 00:08:42.971 Test: bs_test_grow ...passed 00:08:42.971 Test: blob_serialize_test ...passed 00:08:42.971 Test: super_block_crc ...passed 00:08:42.971 Test: blob_thin_prov_write_count_io ...passed 00:08:43.229 Test: bs_load_iter_test ...passed 00:08:43.229 Test: blob_relations ...[2024-11-18 00:51:17.398381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:43.229 [2024-11-18 00:51:17.398717] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.229 [2024-11-18 00:51:17.399687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:43.229 [2024-11-18 00:51:17.399846] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.229 passed 00:08:43.229 Test: blob_relations2 ...[2024-11-18 00:51:17.424497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:43.229 [2024-11-18 00:51:17.424852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.229 [2024-11-18 00:51:17.424938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:43.229 [2024-11-18 00:51:17.425039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.229 [2024-11-18 00:51:17.426376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:43.229 [2024-11-18 00:51:17.426539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.229 [2024-11-18 00:51:17.427000] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:43.229 [2024-11-18 00:51:17.427152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.229 passed 00:08:43.229 Test: blob_relations3 ...passed 00:08:43.488 Test: blobstore_clean_power_failure ...passed 00:08:43.488 Test: blob_delete_snapshot_power_failure ...[2024-11-18 00:51:17.703890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:43.488 [2024-11-18 00:51:17.724863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:43.488 [2024-11-18 00:51:17.746328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:43.488 [2024-11-18 00:51:17.746689] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:43.488 [2024-11-18 00:51:17.746762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.488 [2024-11-18 00:51:17.770969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:43.488 [2024-11-18 00:51:17.771348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:43.488 [2024-11-18 00:51:17.771408] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:43.488 [2024-11-18 00:51:17.771506] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.488 [2024-11-18 00:51:17.792039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:43.488 [2024-11-18 00:51:17.792421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:43.488 [2024-11-18 00:51:17.792479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:43.488 [2024-11-18 00:51:17.792576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.488 [2024-11-18 00:51:17.812933] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:43.488 [2024-11-18 00:51:17.813333] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.488 [2024-11-18 00:51:17.833558] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:43.488 [2024-11-18 00:51:17.833954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.488 [2024-11-18 00:51:17.854441] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:43.488 [2024-11-18 00:51:17.854816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.747 passed 00:08:43.747 Test: blob_create_snapshot_power_failure ...[2024-11-18 00:51:17.916875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:43.747 [2024-11-18 00:51:17.938005] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:43.747 [2024-11-18 00:51:17.979784] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:43.747 [2024-11-18 00:51:18.000931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:43.747 passed 00:08:43.747 Test: blob_io_unit ...passed 00:08:43.747 Test: blob_io_unit_compatibility ...passed 00:08:43.747 Test: blob_ext_md_pages ...passed 00:08:44.005 Test: blob_esnap_io_4096_4096 ...passed 00:08:44.005 Test: blob_esnap_io_512_512 ...passed 00:08:44.005 Test: blob_esnap_io_4096_512 ...passed 00:08:44.005 Test: 
blob_esnap_io_512_4096 ...passed 00:08:44.005 Suite: blob_bs_copy_extent 00:08:44.005 Test: blob_open ...passed 00:08:44.005 Test: blob_create ...[2024-11-18 00:51:18.397002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:44.264 passed 00:08:44.264 Test: blob_create_loop ...passed 00:08:44.264 Test: blob_create_fail ...[2024-11-18 00:51:18.542267] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:44.264 passed 00:08:44.264 Test: blob_create_internal ...passed 00:08:44.524 Test: blob_create_zero_extent ...passed 00:08:44.524 Test: blob_snapshot ...passed 00:08:44.524 Test: blob_clone ...passed 00:08:44.524 Test: blob_inflate ...[2024-11-18 00:51:18.847932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:44.524 passed 00:08:44.784 Test: blob_delete ...passed 00:08:44.784 Test: blob_resize_test ...[2024-11-18 00:51:18.965134] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:44.784 passed 00:08:44.784 Test: channel_ops ...passed 00:08:44.784 Test: blob_super ...passed 00:08:44.784 Test: blob_rw_verify_iov ...passed 00:08:45.041 Test: blob_unmap ...passed 00:08:45.041 Test: blob_iter ...passed 00:08:45.041 Test: blob_parse_md ...passed 00:08:45.041 Test: bs_load_pending_removal ...passed 00:08:45.041 Test: bs_unload ...[2024-11-18 00:51:19.434466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:45.298 passed 00:08:45.298 Test: bs_usable_clusters ...passed 00:08:45.298 Test: blob_crc ...[2024-11-18 00:51:19.550749] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:45.298 [2024-11-18 00:51:19.551151] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:45.298 passed 00:08:45.298 Test: blob_flags ...passed 00:08:45.299 Test: bs_version ...passed 00:08:45.558 Test: blob_set_xattrs_test ...[2024-11-18 00:51:19.725689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:45.558 [2024-11-18 00:51:19.726002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:45.558 passed 00:08:45.558 Test: blob_thin_prov_alloc ...passed 00:08:45.558 Test: blob_insert_cluster_msg_test ...passed 00:08:45.818 Test: blob_thin_prov_rw ...passed 00:08:45.818 Test: blob_thin_prov_rle ...passed 00:08:45.818 Test: blob_thin_prov_rw_iov ...passed 00:08:45.818 Test: blob_snapshot_rw ...passed 00:08:46.077 Test: blob_snapshot_rw_iov ...passed 00:08:46.336 Test: blob_inflate_rw ...passed 00:08:46.336 Test: blob_snapshot_freeze_io ...passed 00:08:46.336 Test: blob_operation_split_rw ...passed 00:08:46.594 Test: blob_operation_split_rw_iov ...passed 00:08:46.594 Test: blob_simultaneous_operations ...[2024-11-18 00:51:20.927133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:46.595 [2024-11-18 
00:51:20.927508] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:46.595 [2024-11-18 00:51:20.928090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:46.595 [2024-11-18 00:51:20.928237] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:46.595 [2024-11-18 00:51:20.931713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:46.595 [2024-11-18 00:51:20.931879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:46.595 [2024-11-18 00:51:20.932029] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:46.595 [2024-11-18 00:51:20.932130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:46.595 passed 00:08:46.854 Test: blob_persist_test ...passed 00:08:46.854 Test: blob_decouple_snapshot ...passed 00:08:46.854 Test: blob_seek_io_unit ...passed 00:08:46.854 Test: blob_nested_freezes ...passed 00:08:46.854 Suite: blob_blob_copy_extent 00:08:47.113 Test: blob_write ...passed 00:08:47.113 Test: blob_read ...passed 00:08:47.113 Test: blob_rw_verify ...passed 00:08:47.113 Test: blob_rw_verify_iov_nomem ...passed 00:08:47.113 Test: blob_rw_iov_read_only ...passed 00:08:47.372 Test: blob_xattr ...passed 00:08:47.372 Test: blob_dirty_shutdown ...passed 00:08:47.372 Test: blob_is_degraded ...passed 00:08:47.372 Suite: blob_esnap_bs_copy_extent 00:08:47.372 Test: blob_esnap_create ...passed 00:08:47.631 Test: blob_esnap_thread_add_remove ...passed 00:08:47.631 Test: blob_esnap_clone_snapshot ...passed 00:08:47.631 Test: blob_esnap_clone_inflate ...passed 00:08:47.631 Test: blob_esnap_clone_decouple ...passed 00:08:47.890 Test: blob_esnap_clone_reload ...passed 00:08:47.890 Test: blob_esnap_hotplug ...passed 00:08:47.890 00:08:47.890 Run Summary: Type Total Ran Passed Failed Inactive 00:08:47.890 suites 16 16 n/a 0 0 00:08:47.890 tests 348 348 348 0 0 00:08:47.890 asserts 92605 92605 92605 0 n/a 00:08:47.890 00:08:47.890 Elapsed time = 20.204 seconds 00:08:47.890 00:51:22 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:08:47.890 00:08:47.890 00:08:47.890 CUnit - A unit testing framework for C - Version 2.1-3 00:08:47.890 http://cunit.sourceforge.net/ 00:08:47.890 00:08:47.890 00:08:47.890 Suite: blob_bdev 00:08:47.890 Test: create_bs_dev ...passed 00:08:47.890 Test: create_bs_dev_ro ...[2024-11-18 00:51:22.221827] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:08:47.890 passed 00:08:47.890 Test: create_bs_dev_rw ...passed 00:08:47.891 Test: claim_bs_dev ...[2024-11-18 00:51:22.222583] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:08:47.891 passed 00:08:47.891 Test: claim_bs_dev_ro ...passed 00:08:47.891 Test: deferred_destroy_refs ...passed 00:08:47.891 Test: deferred_destroy_channels ...passed 00:08:47.891 Test: deferred_destroy_threads ...passed 00:08:47.891 00:08:47.891 Run Summary: Type Total Ran Passed Failed Inactive 00:08:47.891 suites 1 1 n/a 0 0 00:08:47.891 tests 8 8 8 0 0 00:08:47.891 
asserts 119 119 119 0 n/a 00:08:47.891 00:08:47.891 Elapsed time = 0.001 seconds 00:08:47.891 00:51:22 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:08:47.891 00:08:47.891 00:08:47.891 CUnit - A unit testing framework for C - Version 2.1-3 00:08:47.891 http://cunit.sourceforge.net/ 00:08:47.891 00:08:47.891 00:08:47.891 Suite: tree 00:08:47.891 Test: blobfs_tree_op_test ...passed 00:08:47.891 00:08:47.891 Run Summary: Type Total Ran Passed Failed Inactive 00:08:47.891 suites 1 1 n/a 0 0 00:08:47.891 tests 1 1 1 0 0 00:08:47.891 asserts 27 27 27 0 n/a 00:08:47.891 00:08:47.891 Elapsed time = 0.000 seconds 00:08:48.150 00:51:22 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:08:48.150 00:08:48.150 00:08:48.150 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.150 http://cunit.sourceforge.net/ 00:08:48.150 00:08:48.150 00:08:48.150 Suite: blobfs_async_ut 00:08:48.150 Test: fs_init ...passed 00:08:48.150 Test: fs_open ...passed 00:08:48.150 Test: fs_create ...passed 00:08:48.150 Test: fs_truncate ...passed 00:08:48.150 Test: fs_rename ...[2024-11-18 00:51:22.518263] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:08:48.150 passed 00:08:48.150 Test: fs_rw_async ...passed 00:08:48.410 Test: fs_writev_readv_async ...passed 00:08:48.410 Test: tree_find_buffer_ut ...passed 00:08:48.410 Test: channel_ops ...passed 00:08:48.410 Test: channel_ops_sync ...passed 00:08:48.410 00:08:48.410 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.410 suites 1 1 n/a 0 0 00:08:48.410 tests 10 10 10 0 0 00:08:48.410 asserts 292 292 292 0 n/a 00:08:48.410 00:08:48.410 Elapsed time = 0.278 seconds 00:08:48.410 00:51:22 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:08:48.410 00:08:48.410 00:08:48.410 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.410 http://cunit.sourceforge.net/ 00:08:48.410 00:08:48.410 00:08:48.410 Suite: blobfs_sync_ut 00:08:48.410 Test: cache_read_after_write ...[2024-11-18 00:51:22.802336] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:08:48.410 passed 00:08:48.669 Test: file_length ...passed 00:08:48.669 Test: append_write_to_extend_blob ...passed 00:08:48.669 Test: partial_buffer ...passed 00:08:48.669 Test: cache_write_null_buffer ...passed 00:08:48.669 Test: fs_create_sync ...passed 00:08:48.669 Test: fs_rename_sync ...passed 00:08:48.669 Test: cache_append_no_cache ...passed 00:08:48.669 Test: fs_delete_file_without_close ...passed 00:08:48.669 00:08:48.669 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.669 suites 1 1 n/a 0 0 00:08:48.669 tests 9 9 9 0 0 00:08:48.669 asserts 345 345 345 0 n/a 00:08:48.669 00:08:48.669 Elapsed time = 0.565 seconds 00:08:48.669 00:51:23 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:08:48.929 00:08:48.929 00:08:48.929 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.929 http://cunit.sourceforge.net/ 00:08:48.929 00:08:48.929 00:08:48.929 Suite: blobfs_bdev_ut 00:08:48.929 Test: spdk_blobfs_bdev_detect_test ...[2024-11-18 00:51:23.080781] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
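The "Cannot find the file=..." errors in the blobfs suites above are deliberately provoked: the tests ask spdk_fs_delete_file_async to remove a name that does not exist and expect the operation to fail. A minimal sketch of that call pattern follows; the callback name is an assumption, the file name simply mirrors the one in the log, and the missing-file case is typically reported to the callback as a negative errno value.

#include <errno.h>

#include "spdk/blobfs.h"

/* Hypothetical completion callback for the delete operation. */
static void
delete_done(void *ctx, int fserrno)
{
	if (fserrno == -ENOENT) {
		/* Name not found -- the case these unit tests exercise on purpose. */
	}
}

static void
remove_file(struct spdk_filesystem *fs)
{
	spdk_fs_delete_file_async(fs, "file1", delete_done, NULL);
}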
00:08:48.929 passed 00:08:48.929 Test: spdk_blobfs_bdev_create_test ...[2024-11-18 00:51:23.081666] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:48.929 passed 00:08:48.929 Test: spdk_blobfs_bdev_mount_test ...passed 00:08:48.929 00:08:48.930 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.930 suites 1 1 n/a 0 0 00:08:48.930 tests 3 3 3 0 0 00:08:48.930 asserts 9 9 9 0 n/a 00:08:48.930 00:08:48.930 Elapsed time = 0.001 seconds 00:08:48.930 ************************************ 00:08:48.930 END TEST unittest_blob_blobfs 00:08:48.930 ************************************ 00:08:48.930 00:08:48.930 real 0m21.305s 00:08:48.930 user 0m20.557s 00:08:48.930 sys 0m0.975s 00:08:48.930 00:51:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:48.930 00:51:23 -- common/autotest_common.sh@10 -- # set +x 00:08:48.930 00:51:23 -- unit/unittest.sh@208 -- # run_test unittest_event unittest_event 00:08:48.930 00:51:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:48.930 00:51:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.930 00:51:23 -- common/autotest_common.sh@10 -- # set +x 00:08:48.930 ************************************ 00:08:48.930 START TEST unittest_event 00:08:48.930 ************************************ 00:08:48.930 00:51:23 -- common/autotest_common.sh@1114 -- # unittest_event 00:08:48.930 00:51:23 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:08:48.930 00:08:48.930 00:08:48.930 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.930 http://cunit.sourceforge.net/ 00:08:48.930 00:08:48.930 00:08:48.930 Suite: app_suite 00:08:48.930 Test: test_spdk_app_parse_args ...app_ut: invalid option -- 'z' 00:08:48.930 app_ut [options] 00:08:48.930 options: 00:08:48.930 -c, --config JSON config file (default none) 00:08:48.930 --json JSON config file (default none) 00:08:48.930 --json-ignore-init-errors 00:08:48.930 don't exit on invalid config entry 00:08:48.930 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:48.930 -g, --single-file-segments 00:08:48.930 force creating just one hugetlbfs file 00:08:48.930 -h, --help show this usage 00:08:48.930 -i, --shm-id shared memory ID (optional) 00:08:48.930 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:48.930 --lcores lcore to CPU mapping list. The list is in the format: 00:08:48.930 [<,lcores[@CPUs]>...] 00:08:48.930 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:48.930 Within the group, '-' is used for range separator, 00:08:48.930 ',' is used for single number separator. 00:08:48.930 '( )' can be omitted for single element group, 00:08:48.930 '@' can be omitted if cpus and lcores have the same value 00:08:48.930 -n, --mem-channels channel number of memory channels used for DPDK 00:08:48.930 -p, --main-core main (primary) core for DPDK 00:08:48.930 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:48.930 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:48.930 --disable-cpumask-locks Disable CPU core lock files. 
00:08:48.930 --silence-noticelog disable notice level logging to stderr 00:08:48.930 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:48.930 -u, --no-pci disable PCI access 00:08:48.930 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:48.930 --max-delay maximum reactor delay (in microseconds) 00:08:48.930 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:48.930 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:48.930 -R, --huge-unlink unlink huge files after initialization 00:08:48.930 -v, --version print SPDK version 00:08:48.930 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:48.930 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:48.930 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:48.930 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:48.930 Tracepoints vary in size and can use more than one trace entry. 00:08:48.930 --rpcs-allowed comma-separated list of permitted RPCS 00:08:48.930 --env-context Opaque context for use of the env implementation 00:08:48.930 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:48.930 --no-huge run without using hugepages 00:08:48.930 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:08:48.930 -e, --tpoint-group [:] 00:08:48.930 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:08:48.930 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:48.930 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:08:48.930 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:48.930 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:48.930 app_ut [options] 00:08:48.930 app_ut: unrecognized option '--test-long-opt' 00:08:48.930 options: 00:08:48.930 -c, --config JSON config file (default none) 00:08:48.930 --json JSON config file (default none) 00:08:48.930 --json-ignore-init-errors 00:08:48.930 don't exit on invalid config entry 00:08:48.930 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:48.930 -g, --single-file-segments 00:08:48.930 force creating just one hugetlbfs file 00:08:48.930 -h, --help show this usage 00:08:48.930 -i, --shm-id shared memory ID (optional) 00:08:48.930 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:48.930 --lcores lcore to CPU mapping list. The list is in the format: 00:08:48.930 [<,lcores[@CPUs]>...] 00:08:48.930 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:48.930 Within the group, '-' is used for range separator, 00:08:48.930 ',' is used for single number separator. 
00:08:48.930 '( )' can be omitted for single element group, 00:08:48.930 '@' can be omitted if cpus and lcores have the same value 00:08:48.930 -n, --mem-channels channel number of memory channels used for DPDK 00:08:48.930 -p, --main-core main (primary) core for DPDK 00:08:48.930 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:48.930 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:48.930 --disable-cpumask-locks Disable CPU core lock files. 00:08:48.930 --silence-noticelog disable notice level logging to stderr 00:08:48.930 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:48.930 -u, --no-pci disable PCI access 00:08:48.930 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:48.930 --max-delay maximum reactor delay (in microseconds) 00:08:48.930 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:48.930 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:48.930 -R, --huge-unlink unlink huge files after initialization 00:08:48.930 -v, --version print SPDK version 00:08:48.930 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:48.930 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:48.930 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:48.930 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:48.930 Tracepoints vary in size and can use more than one trace entry. 00:08:48.930 --rpcs-allowed comma-separated list of permitted RPCS 00:08:48.930 --env-context Opaque context for use of the env implementation 00:08:48.930 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:48.930 --no-huge run without using hugepages 00:08:48.930 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:08:48.930 -e, --tpoint-group [:] 00:08:48.930 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:08:48.930 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:48.930 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:08:48.930 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:48.930 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:48.930 [2024-11-18 00:51:23.198746] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
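The "Duplicated option 'c'" error just above is spdk_app_parse_args refusing an application-specific getopt string that reuses a short option already owned by the generic SPDK options listed in the usage text. A minimal sketch of the usual call pattern with a non-colliding app option is shown below; the option letter 'z', the callback names and the application name are illustrative assumptions, and spdk_app_opts_init has taken an explicit size argument only in more recent SPDK releases.

#include <errno.h>
#include <stdio.h>

#include "spdk/event.h"

/* Hypothetical handler for an app-specific '-z' flag. */
static int
parse_arg(int ch, char *arg)
{
	return ch == 'z' ? 0 : -EINVAL;
}

static void
usage(void)
{
	printf(" -z	enable the (hypothetical) extra feature\n");
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "app_example";

	/* 'z' avoids the generic short options ('c', 'd', 'g', ...) shown in the usage dump above. */
	rc = spdk_app_parse_args(argc, argv, &opts, "z", NULL, parse_arg, usage);
	if (rc != SPDK_APP_PARSE_ARGS_SUCCESS) {
		return rc;
	}

	/* spdk_app_start(&opts, start_fn, NULL) would normally follow here. */
	return 0;
}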
00:08:48.930 [2024-11-18 00:51:23.199226] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:08:48.930 app_ut [options] 00:08:48.930 options: 00:08:48.930 -c, --config JSON config file (default none) 00:08:48.930 --json JSON config file (default none) 00:08:48.930 --json-ignore-init-errors 00:08:48.930 don't exit on invalid config entry 00:08:48.930 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:48.930 -g, --single-file-segments 00:08:48.930 force creating just one hugetlbfs file 00:08:48.930 -h, --help show this usage 00:08:48.930 -i, --shm-id shared memory ID (optional) 00:08:48.930 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:48.930 --lcores lcore to CPU mapping list. The list is in the format: 00:08:48.930 [<,lcores[@CPUs]>...] 00:08:48.930 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:48.931 Within the group, '-' is used for range separator, 00:08:48.931 ',' is used for single number separator. 00:08:48.931 '( )' can be omitted for single element group, 00:08:48.931 '@' can be omitted if cpus and lcores have the same value 00:08:48.931 -n, --mem-channels channel number of memory channels used for DPDK 00:08:48.931 -p, --main-core main (primary) core for DPDK 00:08:48.931 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:48.931 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:48.931 --disable-cpumask-locks Disable CPU core lock files. 00:08:48.931 --silence-noticelog disable notice level logging to stderr 00:08:48.931 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:48.931 -u, --no-pci disable PCI access 00:08:48.931 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:48.931 --max-delay maximum reactor delay (in microseconds) 00:08:48.931 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:48.931 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:48.931 -R, --huge-unlink unlink huge files after initialization 00:08:48.931 -v, --version print SPDK version 00:08:48.931 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:48.931 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:48.931 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:48.931 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:48.931 Tracepoints vary in size and can use more than one trace entry. 00:08:48.931 --rpcs-allowed comma-separated list of permitted RPCS 00:08:48.931 --env-context Opaque context for use of the env implementation 00:08:48.931 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:48.931 --no-huge run without using hugepages 00:08:48.931 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:08:48.931 -e, --tpoint-group [:] 00:08:48.931 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:08:48.931 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:48.931 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:08:48.931 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:48.931 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:48.931 [2024-11-18 00:51:23.201710] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:08:48.931 passed 00:08:48.931 00:08:48.931 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.931 suites 1 1 n/a 0 0 00:08:48.931 tests 1 1 1 0 0 00:08:48.931 asserts 8 8 8 0 n/a 00:08:48.931 00:08:48.931 Elapsed time = 0.002 seconds 00:08:48.931 00:51:23 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:08:48.931 00:08:48.931 00:08:48.931 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.931 http://cunit.sourceforge.net/ 00:08:48.931 00:08:48.931 00:08:48.931 Suite: app_suite 00:08:48.931 Test: test_create_reactor ...passed 00:08:48.931 Test: test_init_reactors ...passed 00:08:48.931 Test: test_event_call ...passed 00:08:48.931 Test: test_schedule_thread ...passed 00:08:48.931 Test: test_reschedule_thread ...passed 00:08:48.931 Test: test_bind_thread ...passed 00:08:48.931 Test: test_for_each_reactor ...passed 00:08:48.931 Test: test_reactor_stats ...passed 00:08:48.931 Test: test_scheduler ...passed 00:08:48.931 Test: test_governor ...passed 00:08:48.931 00:08:48.931 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.931 suites 1 1 n/a 0 0 00:08:48.931 tests 10 10 10 0 0 00:08:48.931 asserts 344 344 344 0 n/a 00:08:48.931 00:08:48.931 Elapsed time = 0.020 seconds 00:08:48.931 00:08:48.931 real 0m0.114s 00:08:48.931 user 0m0.063s 00:08:48.931 sys 0m0.042s 00:08:48.931 00:51:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:48.931 00:51:23 -- common/autotest_common.sh@10 -- # set +x 00:08:48.931 ************************************ 00:08:48.931 END TEST unittest_event 00:08:48.931 ************************************ 00:08:49.190 00:51:23 -- unit/unittest.sh@209 -- # uname -s 00:08:49.190 00:51:23 -- unit/unittest.sh@209 -- # '[' Linux = Linux ']' 00:08:49.190 00:51:23 -- unit/unittest.sh@210 -- # run_test unittest_ftl unittest_ftl 00:08:49.190 00:51:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.190 00:51:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.190 00:51:23 -- common/autotest_common.sh@10 -- # set +x 00:08:49.190 ************************************ 00:08:49.190 START TEST unittest_ftl 00:08:49.190 ************************************ 00:08:49.190 00:51:23 -- common/autotest_common.sh@1114 -- # unittest_ftl 00:08:49.190 00:51:23 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:08:49.190 00:08:49.190 00:08:49.190 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.190 http://cunit.sourceforge.net/ 00:08:49.190 00:08:49.190 00:08:49.190 Suite: ftl_band_suite 00:08:49.190 Test: test_band_block_offset_from_addr_base ...passed 00:08:49.190 Test: test_band_block_offset_from_addr_offset ...passed 00:08:49.190 Test: test_band_addr_from_block_offset ...passed 00:08:49.190 Test: test_band_set_addr ...passed 00:08:49.190 Test: test_invalidate_addr ...passed 00:08:49.190 Test: test_next_xfer_addr ...passed 00:08:49.190 00:08:49.190 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.190 suites 1 1 n/a 0 0 00:08:49.190 tests 6 6 6 0 0 00:08:49.190 asserts 30356 30356 30356 0 n/a 00:08:49.190 
00:08:49.190 Elapsed time = 0.196 seconds 00:08:49.450 00:51:23 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:08:49.450 00:08:49.450 00:08:49.450 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.450 http://cunit.sourceforge.net/ 00:08:49.450 00:08:49.450 00:08:49.450 Suite: ftl_bitmap 00:08:49.450 Test: test_ftl_bitmap_create ...[2024-11-18 00:51:23.704014] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:08:49.450 [2024-11-18 00:51:23.704549] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:08:49.450 passed 00:08:49.450 Test: test_ftl_bitmap_get ...passed 00:08:49.450 Test: test_ftl_bitmap_set ...passed 00:08:49.450 Test: test_ftl_bitmap_clear ...passed 00:08:49.450 Test: test_ftl_bitmap_find_first_set ...passed 00:08:49.450 Test: test_ftl_bitmap_find_first_clear ...passed 00:08:49.450 Test: test_ftl_bitmap_count_set ...passed 00:08:49.450 00:08:49.450 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.450 suites 1 1 n/a 0 0 00:08:49.450 tests 7 7 7 0 0 00:08:49.450 asserts 137 137 137 0 n/a 00:08:49.450 00:08:49.450 Elapsed time = 0.001 seconds 00:08:49.450 00:51:23 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:08:49.450 00:08:49.450 00:08:49.450 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.450 http://cunit.sourceforge.net/ 00:08:49.450 00:08:49.450 00:08:49.450 Suite: ftl_io_suite 00:08:49.450 Test: test_completion ...passed 00:08:49.450 Test: test_multiple_ios ...passed 00:08:49.450 00:08:49.450 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.450 suites 1 1 n/a 0 0 00:08:49.450 tests 2 2 2 0 0 00:08:49.450 asserts 47 47 47 0 n/a 00:08:49.450 00:08:49.450 Elapsed time = 0.003 seconds 00:08:49.450 00:51:23 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:08:49.450 00:08:49.450 00:08:49.450 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.450 http://cunit.sourceforge.net/ 00:08:49.450 00:08:49.450 00:08:49.450 Suite: ftl_mngt 00:08:49.450 Test: test_next_step ...passed 00:08:49.450 Test: test_continue_step ...passed 00:08:49.450 Test: test_get_func_and_step_cntx_alloc ...passed 00:08:49.450 Test: test_fail_step ...passed 00:08:49.450 Test: test_mngt_call_and_call_rollback ...passed 00:08:49.450 Test: test_nested_process_failure ...passed 00:08:49.450 00:08:49.450 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.450 suites 1 1 n/a 0 0 00:08:49.450 tests 6 6 6 0 0 00:08:49.450 asserts 176 176 176 0 n/a 00:08:49.450 00:08:49.450 Elapsed time = 0.003 seconds 00:08:49.450 00:51:23 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:08:49.450 00:08:49.450 00:08:49.450 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.450 http://cunit.sourceforge.net/ 00:08:49.450 00:08:49.450 00:08:49.450 Suite: ftl_mempool 00:08:49.450 Test: test_ftl_mempool_create ...passed 00:08:49.450 Test: test_ftl_mempool_get_put ...passed 00:08:49.450 00:08:49.450 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.450 suites 1 1 n/a 0 0 00:08:49.450 tests 2 2 2 0 0 00:08:49.450 asserts 36 36 36 0 n/a 00:08:49.450 00:08:49.450 Elapsed time = 0.000 seconds 00:08:49.710 00:51:23 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:08:49.710 00:08:49.710 00:08:49.710 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.710 http://cunit.sourceforge.net/ 00:08:49.710 00:08:49.710 00:08:49.710 Suite: ftl_addr64_suite 00:08:49.710 Test: test_addr_cached ...passed 00:08:49.710 00:08:49.710 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.710 suites 1 1 n/a 0 0 00:08:49.710 tests 1 1 1 0 0 00:08:49.710 asserts 1536 1536 1536 0 n/a 00:08:49.710 00:08:49.710 Elapsed time = 0.000 seconds 00:08:49.710 00:51:23 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:08:49.710 00:08:49.710 00:08:49.710 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.710 http://cunit.sourceforge.net/ 00:08:49.710 00:08:49.710 00:08:49.710 Suite: ftl_sb 00:08:49.710 Test: test_sb_crc_v2 ...passed 00:08:49.710 Test: test_sb_crc_v3 ...passed 00:08:49.710 Test: test_sb_v3_md_layout ...[2024-11-18 00:51:23.911679] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:08:49.710 [2024-11-18 00:51:23.912856] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:49.710 [2024-11-18 00:51:23.913325] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:49.710 [2024-11-18 00:51:23.913656] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:49.710 [2024-11-18 00:51:23.913956] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:49.710 [2024-11-18 00:51:23.914354] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:08:49.710 [2024-11-18 00:51:23.914666] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:49.710 [2024-11-18 00:51:23.915019] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:49.710 [2024-11-18 00:51:23.915389] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:49.710 [2024-11-18 00:51:23.915689] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:49.710 [2024-11-18 00:51:23.916003] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:49.710 passed 00:08:49.710 Test: test_sb_v5_md_layout ...passed 00:08:49.710 00:08:49.710 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.710 suites 1 1 n/a 0 0 00:08:49.710 tests 4 4 4 0 0 00:08:49.710 asserts 148 148 148 0 n/a 00:08:49.710 00:08:49.710 Elapsed time = 0.003 seconds 00:08:49.710 00:51:23 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:08:49.710 00:08:49.710 00:08:49.710 CUnit - A unit testing framework 
for C - Version 2.1-3 00:08:49.710 http://cunit.sourceforge.net/ 00:08:49.710 00:08:49.710 00:08:49.710 Suite: ftl_layout_upgrade 00:08:49.710 Test: test_l2p_upgrade ...passed 00:08:49.710 00:08:49.710 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.710 suites 1 1 n/a 0 0 00:08:49.710 tests 1 1 1 0 0 00:08:49.710 asserts 140 140 140 0 n/a 00:08:49.710 00:08:49.710 Elapsed time = 0.001 seconds 00:08:49.710 00:08:49.710 real 0m0.627s 00:08:49.710 user 0m0.281s 00:08:49.710 sys 0m0.336s 00:08:49.710 00:51:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:49.710 00:51:23 -- common/autotest_common.sh@10 -- # set +x 00:08:49.710 ************************************ 00:08:49.710 END TEST unittest_ftl 00:08:49.710 ************************************ 00:08:49.710 00:51:24 -- unit/unittest.sh@213 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:49.710 00:51:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.710 00:51:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.710 00:51:24 -- common/autotest_common.sh@10 -- # set +x 00:08:49.710 ************************************ 00:08:49.710 START TEST unittest_accel 00:08:49.710 ************************************ 00:08:49.710 00:51:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:49.710 00:08:49.710 00:08:49.710 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.710 http://cunit.sourceforge.net/ 00:08:49.710 00:08:49.710 00:08:49.710 Suite: accel_sequence 00:08:49.710 Test: test_sequence_fill_copy ...passed 00:08:49.710 Test: test_sequence_abort ...passed 00:08:49.710 Test: test_sequence_append_error ...passed 00:08:49.710 Test: test_sequence_completion_error ...[2024-11-18 00:51:24.103566] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f711ba227c0 00:08:49.710 [2024-11-18 00:51:24.104067] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f711ba227c0 00:08:49.710 [2024-11-18 00:51:24.104143] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f711ba227c0 00:08:49.710 [2024-11-18 00:51:24.104210] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f711ba227c0 00:08:49.710 passed 00:08:49.710 Test: test_sequence_decompress ...passed 00:08:49.710 Test: test_sequence_reverse ...passed 00:08:49.970 Test: test_sequence_copy_elision ...passed 00:08:49.970 Test: test_sequence_accel_buffers ...passed 00:08:49.970 Test: test_sequence_memory_domain ...[2024-11-18 00:51:24.119163] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:08:49.970 passed 00:08:49.970 Test: test_sequence_module_memory_domain ...[2024-11-18 00:51:24.119411] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:08:49.970 passed 00:08:49.970 Test: test_sequence_crypto ...passed 00:08:49.970 Test: test_sequence_driver ...[2024-11-18 00:51:24.128115] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f711aa237c0 using driver: ut 00:08:49.970 
[2024-11-18 00:51:24.128280] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f711aa237c0 through driver: ut 00:08:49.970 passed 00:08:49.971 Test: test_sequence_same_iovs ...passed 00:08:49.971 Test: test_sequence_crc32 ...passed 00:08:49.971 Suite: accel 00:08:49.971 Test: test_spdk_accel_task_complete ...passed 00:08:49.971 Test: test_get_task ...passed 00:08:49.971 Test: test_spdk_accel_submit_copy ...passed 00:08:49.971 Test: test_spdk_accel_submit_dualcast ...[2024-11-18 00:51:24.134798] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:49.971 [2024-11-18 00:51:24.134905] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:49.971 passed 00:08:49.971 Test: test_spdk_accel_submit_compare ...passed 00:08:49.971 Test: test_spdk_accel_submit_fill ...passed 00:08:49.971 Test: test_spdk_accel_submit_crc32c ...passed 00:08:49.971 Test: test_spdk_accel_submit_crc32cv ...passed 00:08:49.971 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:08:49.971 Test: test_spdk_accel_submit_xor ...passed 00:08:49.971 Test: test_spdk_accel_module_find_by_name ...passed 00:08:49.971 Test: test_spdk_accel_module_register ...passed 00:08:49.971 00:08:49.971 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.971 suites 2 2 n/a 0 0 00:08:49.971 tests 26 26 26 0 0 00:08:49.971 asserts 831 831 831 0 n/a 00:08:49.971 00:08:49.971 Elapsed time = 0.046 seconds 00:08:49.971 00:08:49.971 real 0m0.107s 00:08:49.971 user 0m0.045s 00:08:49.971 sys 0m0.062s 00:08:49.971 00:51:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:49.971 00:51:24 -- common/autotest_common.sh@10 -- # set +x 00:08:49.971 ************************************ 00:08:49.971 END TEST unittest_accel 00:08:49.971 ************************************ 00:08:49.971 00:51:24 -- unit/unittest.sh@214 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:49.971 00:51:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.971 00:51:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.971 00:51:24 -- common/autotest_common.sh@10 -- # set +x 00:08:49.971 ************************************ 00:08:49.971 START TEST unittest_ioat 00:08:49.971 ************************************ 00:08:49.971 00:51:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:49.971 00:08:49.971 00:08:49.971 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.971 http://cunit.sourceforge.net/ 00:08:49.971 00:08:49.971 00:08:49.971 Suite: ioat 00:08:49.971 Test: ioat_state_check ...passed 00:08:49.971 00:08:49.971 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.971 suites 1 1 n/a 0 0 00:08:49.971 tests 1 1 1 0 0 00:08:49.971 asserts 32 32 32 0 n/a 00:08:49.971 00:08:49.971 Elapsed time = 0.000 seconds 00:08:49.971 00:08:49.971 real 0m0.039s 00:08:49.971 user 0m0.027s 00:08:49.971 sys 0m0.012s 00:08:49.971 00:51:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:49.971 00:51:24 -- common/autotest_common.sh@10 -- # set +x 00:08:49.971 ************************************ 00:08:49.971 END TEST unittest_ioat 00:08:49.971 ************************************ 00:08:49.971 00:51:24 -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:49.971 00:51:24 -- unit/unittest.sh@216 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:49.971 00:51:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.971 00:51:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.971 00:51:24 -- common/autotest_common.sh@10 -- # set +x 00:08:49.971 ************************************ 00:08:49.971 START TEST unittest_idxd_user 00:08:49.971 ************************************ 00:08:49.971 00:51:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:50.230 00:08:50.230 00:08:50.230 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.230 http://cunit.sourceforge.net/ 00:08:50.230 00:08:50.230 00:08:50.230 Suite: idxd_user 00:08:50.231 Test: test_idxd_wait_cmd ...[2024-11-18 00:51:24.379680] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:50.231 [2024-11-18 00:51:24.380028] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:08:50.231 passed 00:08:50.231 Test: test_idxd_reset_dev ...passed 00:08:50.231 Test: test_idxd_group_config ...passed 00:08:50.231 Test: test_idxd_wq_config ...[2024-11-18 00:51:24.380197] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:50.231 [2024-11-18 00:51:24.380252] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:08:50.231 passed 00:08:50.231 00:08:50.231 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.231 suites 1 1 n/a 0 0 00:08:50.231 tests 4 4 4 0 0 00:08:50.231 asserts 20 20 20 0 n/a 00:08:50.231 00:08:50.231 Elapsed time = 0.001 seconds 00:08:50.231 00:08:50.231 real 0m0.042s 00:08:50.231 user 0m0.018s 00:08:50.231 sys 0m0.024s 00:08:50.231 00:51:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.231 00:51:24 -- common/autotest_common.sh@10 -- # set +x 00:08:50.231 ************************************ 00:08:50.231 END TEST unittest_idxd_user 00:08:50.231 ************************************ 00:08:50.231 00:51:24 -- unit/unittest.sh@218 -- # run_test unittest_iscsi unittest_iscsi 00:08:50.231 00:51:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.231 00:51:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.231 00:51:24 -- common/autotest_common.sh@10 -- # set +x 00:08:50.231 ************************************ 00:08:50.231 START TEST unittest_iscsi 00:08:50.231 ************************************ 00:08:50.231 00:51:24 -- common/autotest_common.sh@1114 -- # unittest_iscsi 00:08:50.231 00:51:24 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:08:50.231 00:08:50.231 00:08:50.231 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.231 http://cunit.sourceforge.net/ 00:08:50.231 00:08:50.231 00:08:50.231 Suite: conn_suite 00:08:50.231 Test: read_task_split_in_order_case ...passed 00:08:50.231 Test: read_task_split_reverse_order_case ...passed 00:08:50.231 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:08:50.231 Test: process_non_read_task_completion_test ...passed 00:08:50.231 Test: free_tasks_on_connection ...passed 00:08:50.231 Test: free_tasks_with_queued_datain ...passed 00:08:50.231 Test: 
abort_queued_datain_task_test ...passed 00:08:50.231 Test: abort_queued_datain_tasks_test ...passed 00:08:50.231 00:08:50.231 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.231 suites 1 1 n/a 0 0 00:08:50.231 tests 8 8 8 0 0 00:08:50.231 asserts 230 230 230 0 n/a 00:08:50.231 00:08:50.231 Elapsed time = 0.000 seconds 00:08:50.231 00:51:24 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:08:50.231 00:08:50.231 00:08:50.231 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.231 http://cunit.sourceforge.net/ 00:08:50.231 00:08:50.231 00:08:50.231 Suite: iscsi_suite 00:08:50.231 Test: param_negotiation_test ...passed 00:08:50.231 Test: list_negotiation_test ...passed 00:08:50.231 Test: parse_valid_test ...passed 00:08:50.231 Test: parse_invalid_test ...[2024-11-18 00:51:24.541941] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:08:50.231 [2024-11-18 00:51:24.542556] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:08:50.231 [2024-11-18 00:51:24.542634] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:08:50.231 [2024-11-18 00:51:24.542728] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:08:50.231 [2024-11-18 00:51:24.542944] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:08:50.231 [2024-11-18 00:51:24.543020] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:08:50.231 [2024-11-18 00:51:24.543212] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:08:50.231 passed 00:08:50.231 00:08:50.231 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.231 suites 1 1 n/a 0 0 00:08:50.231 tests 4 4 4 0 0 00:08:50.231 asserts 161 161 161 0 n/a 00:08:50.231 00:08:50.231 Elapsed time = 0.006 seconds 00:08:50.231 00:51:24 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:08:50.231 00:08:50.231 00:08:50.231 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.231 http://cunit.sourceforge.net/ 00:08:50.231 00:08:50.231 00:08:50.231 Suite: iscsi_target_node_suite 00:08:50.231 Test: add_lun_test_cases ...[2024-11-18 00:51:24.586177] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:08:50.231 [2024-11-18 00:51:24.587053] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:08:50.231 [2024-11-18 00:51:24.587221] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:50.231 [2024-11-18 00:51:24.587600] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:50.231 [2024-11-18 00:51:24.587656] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:08:50.231 passed 00:08:50.231 Test: allow_any_allowed ...passed 00:08:50.231 Test: allow_ipv6_allowed ...passed 00:08:50.231 Test: allow_ipv6_denied ...passed 00:08:50.231 Test: allow_ipv6_invalid ...passed 00:08:50.231 Test: allow_ipv4_allowed ...passed 00:08:50.231 Test: allow_ipv4_denied ...passed 00:08:50.231 Test: allow_ipv4_invalid 
...passed 00:08:50.231 Test: node_access_allowed ...passed 00:08:50.231 Test: node_access_denied_by_empty_netmask ...passed 00:08:50.231 Test: node_access_multi_initiator_groups_cases ...passed 00:08:50.231 Test: allow_iscsi_name_multi_maps_case ...passed 00:08:50.231 Test: chap_param_test_cases ...[2024-11-18 00:51:24.589132] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:08:50.231 [2024-11-18 00:51:24.589460] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:08:50.231 [2024-11-18 00:51:24.589571] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:08:50.231 [2024-11-18 00:51:24.589903] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:08:50.231 passed 00:08:50.231 00:08:50.231 [2024-11-18 00:51:24.589968] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:08:50.231 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.231 suites 1 1 n/a 0 0 00:08:50.231 tests 13 13 13 0 0 00:08:50.231 asserts 50 50 50 0 n/a 00:08:50.231 00:08:50.231 Elapsed time = 0.004 seconds 00:08:50.231 00:51:24 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:08:50.495 00:08:50.495 00:08:50.495 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.495 http://cunit.sourceforge.net/ 00:08:50.495 00:08:50.495 00:08:50.495 Suite: iscsi_suite 00:08:50.495 Test: op_login_check_target_test ...passed 00:08:50.495 Test: op_login_session_normal_test ...[2024-11-18 00:51:24.641262] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:08:50.495 [2024-11-18 00:51:24.641736] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:50.495 [2024-11-18 00:51:24.641811] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:50.495 [2024-11-18 00:51:24.641871] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:50.495 [2024-11-18 00:51:24.641947] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:08:50.495 [2024-11-18 00:51:24.642091] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:50.495 [2024-11-18 00:51:24.642234] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:08:50.495 [2024-11-18 00:51:24.642307] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:50.495 passed 00:08:50.495 Test: maxburstlength_test ...[2024-11-18 00:51:24.642718] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:50.495 [2024-11-18 00:51:24.642790] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header 
(opcode=5) failed on NULL(NULL) 00:08:50.495 passed 00:08:50.495 Test: underflow_for_read_transfer_test ...passed 00:08:50.495 Test: underflow_for_zero_read_transfer_test ...passed 00:08:50.495 Test: underflow_for_request_sense_test ...passed 00:08:50.495 Test: underflow_for_check_condition_test ...passed 00:08:50.495 Test: add_transfer_task_test ...passed 00:08:50.495 Test: get_transfer_task_test ...passed 00:08:50.495 Test: del_transfer_task_test ...passed 00:08:50.495 Test: clear_all_transfer_tasks_test ...passed 00:08:50.495 Test: build_iovs_test ...passed 00:08:50.495 Test: build_iovs_with_md_test ...passed 00:08:50.495 Test: pdu_hdr_op_login_test ...[2024-11-18 00:51:24.644780] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:08:50.495 [2024-11-18 00:51:24.644916] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:08:50.495 [2024-11-18 00:51:24.645034] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:08:50.495 passed 00:08:50.495 Test: pdu_hdr_op_text_test ...[2024-11-18 00:51:24.645166] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:50.495 [2024-11-18 00:51:24.645284] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:08:50.495 passed 00:08:50.496 Test: pdu_hdr_op_logout_test ...[2024-11-18 00:51:24.645369] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:08:50.496 [2024-11-18 00:51:24.645484] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:08:50.496 passed 00:08:50.496 Test: pdu_hdr_op_scsi_test ...[2024-11-18 00:51:24.645703] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:50.496 [2024-11-18 00:51:24.645769] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:50.496 [2024-11-18 00:51:24.645842] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:08:50.496 [2024-11-18 00:51:24.645958] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:50.496 [2024-11-18 00:51:24.646067] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:08:50.496 [2024-11-18 00:51:24.646286] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:08:50.496 passed 00:08:50.496 Test: pdu_hdr_op_task_mgmt_test ...[2024-11-18 00:51:24.646427] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:08:50.496 [2024-11-18 00:51:24.646550] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:08:50.496 passed 00:08:50.496 Test: pdu_hdr_op_nopout_test ...[2024-11-18 00:51:24.646831] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:08:50.496 [2024-11-18 00:51:24.646957] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:50.496 [2024-11-18 00:51:24.647018] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:50.496 [2024-11-18 00:51:24.647082] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:08:50.496 passed 00:08:50.496 Test: pdu_hdr_op_data_test ...[2024-11-18 00:51:24.647145] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:08:50.496 [2024-11-18 00:51:24.647249] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:50.496 [2024-11-18 00:51:24.647336] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:50.496 [2024-11-18 00:51:24.647427] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:08:50.496 [2024-11-18 00:51:24.647492] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:08:50.496 [2024-11-18 00:51:24.647618] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:08:50.496 passed 00:08:50.496 Test: empty_text_with_cbit_test ...[2024-11-18 00:51:24.647689] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:08:50.496 passed 00:08:50.496 Test: pdu_payload_read_test ...[2024-11-18 
00:51:24.649950] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:08:50.496 passed 00:08:50.496 Test: data_out_pdu_sequence_test ...passed 00:08:50.496 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:08:50.496 00:08:50.496 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.496 suites 1 1 n/a 0 0 00:08:50.496 tests 24 24 24 0 0 00:08:50.496 asserts 150253 150253 150253 0 n/a 00:08:50.496 00:08:50.496 Elapsed time = 0.019 seconds 00:08:50.496 00:51:24 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:08:50.496 00:08:50.496 00:08:50.496 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.496 http://cunit.sourceforge.net/ 00:08:50.496 00:08:50.496 00:08:50.496 Suite: init_grp_suite 00:08:50.496 Test: create_initiator_group_success_case ...passed 00:08:50.496 Test: find_initiator_group_success_case ...passed 00:08:50.496 Test: register_initiator_group_twice_case ...passed 00:08:50.496 Test: add_initiator_name_success_case ...passed 00:08:50.496 Test: add_initiator_name_fail_case ...[2024-11-18 00:51:24.709054] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:08:50.496 passed 00:08:50.496 Test: delete_all_initiator_names_success_case ...passed 00:08:50.496 Test: add_netmask_success_case ...passed 00:08:50.496 Test: add_netmask_fail_case ...[2024-11-18 00:51:24.709638] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:08:50.496 passed 00:08:50.496 Test: delete_all_netmasks_success_case ...passed 00:08:50.496 Test: initiator_name_overwrite_all_to_any_case ...passed 00:08:50.496 Test: netmask_overwrite_all_to_any_case ...passed 00:08:50.496 Test: add_delete_initiator_names_case ...passed 00:08:50.496 Test: add_duplicated_initiator_names_case ...passed 00:08:50.496 Test: delete_nonexisting_initiator_names_case ...passed 00:08:50.496 Test: add_delete_netmasks_case ...passed 00:08:50.496 Test: add_duplicated_netmasks_case ...passed 00:08:50.496 Test: delete_nonexisting_netmasks_case ...passed 00:08:50.496 00:08:50.496 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.496 suites 1 1 n/a 0 0 00:08:50.496 tests 17 17 17 0 0 00:08:50.496 asserts 108 108 108 0 n/a 00:08:50.496 00:08:50.496 Elapsed time = 0.001 seconds 00:08:50.496 00:51:24 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:08:50.496 00:08:50.496 00:08:50.496 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.496 http://cunit.sourceforge.net/ 00:08:50.496 00:08:50.496 00:08:50.496 Suite: portal_grp_suite 00:08:50.496 Test: portal_create_ipv4_normal_case ...passed 00:08:50.496 Test: portal_create_ipv6_normal_case ...passed 00:08:50.496 Test: portal_create_ipv4_wildcard_case ...passed 00:08:50.496 Test: portal_create_ipv6_wildcard_case ...passed 00:08:50.496 Test: portal_create_twice_case ...[2024-11-18 00:51:24.759022] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:08:50.496 passed 00:08:50.496 Test: portal_grp_register_unregister_case ...passed 00:08:50.496 Test: portal_grp_register_twice_case ...passed 00:08:50.496 Test: portal_grp_add_delete_case ...passed 00:08:50.496 Test: portal_grp_add_delete_twice_case ...passed 00:08:50.496 00:08:50.496 Run Summary: 
Type Total Ran Passed Failed Inactive 00:08:50.496 suites 1 1 n/a 0 0 00:08:50.496 tests 9 9 9 0 0 00:08:50.496 asserts 44 44 44 0 n/a 00:08:50.496 00:08:50.496 Elapsed time = 0.005 seconds 00:08:50.496 00:08:50.496 real 0m0.316s 00:08:50.496 user 0m0.180s 00:08:50.496 ************************************ 00:08:50.496 END TEST unittest_iscsi 00:08:50.496 ************************************ 00:08:50.496 sys 0m0.137s 00:08:50.496 00:51:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.496 00:51:24 -- common/autotest_common.sh@10 -- # set +x 00:08:50.496 00:51:24 -- unit/unittest.sh@219 -- # run_test unittest_json unittest_json 00:08:50.496 00:51:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.496 00:51:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.496 00:51:24 -- common/autotest_common.sh@10 -- # set +x 00:08:50.496 ************************************ 00:08:50.496 START TEST unittest_json 00:08:50.496 ************************************ 00:08:50.496 00:51:24 -- common/autotest_common.sh@1114 -- # unittest_json 00:08:50.496 00:51:24 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:08:50.496 00:08:50.496 00:08:50.496 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.496 http://cunit.sourceforge.net/ 00:08:50.496 00:08:50.496 00:08:50.496 Suite: json 00:08:50.496 Test: test_parse_literal ...passed 00:08:50.496 Test: test_parse_string_simple ...passed 00:08:50.496 Test: test_parse_string_control_chars ...passed 00:08:50.496 Test: test_parse_string_utf8 ...passed 00:08:50.496 Test: test_parse_string_escapes_twochar ...passed 00:08:50.496 Test: test_parse_string_escapes_unicode ...passed 00:08:50.496 Test: test_parse_number ...passed 00:08:50.496 Test: test_parse_array ...passed 00:08:50.496 Test: test_parse_object ...passed 00:08:50.496 Test: test_parse_nesting ...passed 00:08:50.496 Test: test_parse_comment ...passed 00:08:50.496 00:08:50.496 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.496 suites 1 1 n/a 0 0 00:08:50.496 tests 11 11 11 0 0 00:08:50.496 asserts 1516 1516 1516 0 n/a 00:08:50.496 00:08:50.496 Elapsed time = 0.002 seconds 00:08:50.496 00:51:24 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:08:50.771 00:08:50.771 00:08:50.771 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.771 http://cunit.sourceforge.net/ 00:08:50.771 00:08:50.771 00:08:50.771 Suite: json 00:08:50.771 Test: test_strequal ...passed 00:08:50.771 Test: test_num_to_uint16 ...passed 00:08:50.771 Test: test_num_to_int32 ...passed 00:08:50.771 Test: test_num_to_uint64 ...passed 00:08:50.771 Test: test_decode_object ...passed 00:08:50.771 Test: test_decode_array ...passed 00:08:50.771 Test: test_decode_bool ...passed 00:08:50.771 Test: test_decode_uint16 ...passed 00:08:50.771 Test: test_decode_int32 ...passed 00:08:50.771 Test: test_decode_uint32 ...passed 00:08:50.771 Test: test_decode_uint64 ...passed 00:08:50.771 Test: test_decode_string ...passed 00:08:50.771 Test: test_decode_uuid ...passed 00:08:50.771 Test: test_find ...passed 00:08:50.771 Test: test_find_array ...passed 00:08:50.771 Test: test_iterating ...passed 00:08:50.771 Test: test_free_object ...passed 00:08:50.771 00:08:50.771 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.771 suites 1 1 n/a 0 0 00:08:50.771 tests 17 17 17 0 0 00:08:50.771 asserts 236 236 236 0 n/a 00:08:50.771 00:08:50.771 Elapsed time = 0.001 seconds 00:08:50.771 
00:51:24 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:08:50.771 00:08:50.771 00:08:50.771 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.771 http://cunit.sourceforge.net/ 00:08:50.771 00:08:50.771 00:08:50.771 Suite: json 00:08:50.771 Test: test_write_literal ...passed 00:08:50.771 Test: test_write_string_simple ...passed 00:08:50.771 Test: test_write_string_escapes ...passed 00:08:50.771 Test: test_write_string_utf16le ...passed 00:08:50.771 Test: test_write_number_int32 ...passed 00:08:50.771 Test: test_write_number_uint32 ...passed 00:08:50.771 Test: test_write_number_uint128 ...passed 00:08:50.771 Test: test_write_string_number_uint128 ...passed 00:08:50.771 Test: test_write_number_int64 ...passed 00:08:50.771 Test: test_write_number_uint64 ...passed 00:08:50.771 Test: test_write_number_double ...passed 00:08:50.771 Test: test_write_uuid ...passed 00:08:50.771 Test: test_write_array ...passed 00:08:50.771 Test: test_write_object ...passed 00:08:50.771 Test: test_write_nesting ...passed 00:08:50.771 Test: test_write_val ...passed 00:08:50.771 00:08:50.771 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.771 suites 1 1 n/a 0 0 00:08:50.771 tests 16 16 16 0 0 00:08:50.771 asserts 918 918 918 0 n/a 00:08:50.771 00:08:50.771 Elapsed time = 0.005 seconds 00:08:50.771 00:51:24 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:08:50.771 00:08:50.771 00:08:50.771 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.771 http://cunit.sourceforge.net/ 00:08:50.771 00:08:50.771 00:08:50.771 Suite: jsonrpc 00:08:50.771 Test: test_parse_request ...passed 00:08:50.771 Test: test_parse_request_streaming ...passed 00:08:50.771 00:08:50.771 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.771 suites 1 1 n/a 0 0 00:08:50.772 tests 2 2 2 0 0 00:08:50.772 asserts 289 289 289 0 n/a 00:08:50.772 00:08:50.772 Elapsed time = 0.005 seconds 00:08:50.772 00:08:50.772 real 0m0.163s 00:08:50.772 user 0m0.078s 00:08:50.772 sys 0m0.087s 00:08:50.772 00:51:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.772 00:51:25 -- common/autotest_common.sh@10 -- # set +x 00:08:50.772 ************************************ 00:08:50.772 END TEST unittest_json 00:08:50.772 ************************************ 00:08:50.772 00:51:25 -- unit/unittest.sh@220 -- # run_test unittest_rpc unittest_rpc 00:08:50.772 00:51:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.772 00:51:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.772 00:51:25 -- common/autotest_common.sh@10 -- # set +x 00:08:50.772 ************************************ 00:08:50.772 START TEST unittest_rpc 00:08:50.772 ************************************ 00:08:50.772 00:51:25 -- common/autotest_common.sh@1114 -- # unittest_rpc 00:08:50.772 00:51:25 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:08:50.772 00:08:50.772 00:08:50.772 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.772 http://cunit.sourceforge.net/ 00:08:50.772 00:08:50.772 00:08:50.772 Suite: rpc 00:08:50.772 Test: test_jsonrpc_handler ...passed 00:08:50.772 Test: test_spdk_rpc_is_method_allowed ...passed 00:08:50.772 Test: test_rpc_get_methods ...passed 00:08:50.772 Test: test_rpc_spdk_get_version ...passed 00:08:50.772 Test: test_spdk_rpc_listen_close ...passed[2024-11-18 00:51:25.097503] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 
378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:08:50.772 00:08:50.772 00:08:50.772 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.772 suites 1 1 n/a 0 0 00:08:50.772 tests 5 5 5 0 0 00:08:50.772 asserts 20 20 20 0 n/a 00:08:50.772 00:08:50.772 Elapsed time = 0.000 seconds 00:08:50.772 00:08:50.772 real 0m0.041s 00:08:50.772 user 0m0.020s 00:08:50.772 sys 0m0.021s 00:08:50.772 00:51:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.772 00:51:25 -- common/autotest_common.sh@10 -- # set +x 00:08:50.772 ************************************ 00:08:50.772 END TEST unittest_rpc 00:08:50.772 ************************************ 00:08:51.032 00:51:25 -- unit/unittest.sh@221 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:51.032 00:51:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:51.032 00:51:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.032 00:51:25 -- common/autotest_common.sh@10 -- # set +x 00:08:51.032 ************************************ 00:08:51.032 START TEST unittest_notify 00:08:51.032 ************************************ 00:08:51.032 00:51:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:51.032 00:08:51.032 00:08:51.032 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.032 http://cunit.sourceforge.net/ 00:08:51.032 00:08:51.032 00:08:51.032 Suite: app_suite 00:08:51.032 Test: notify ...passed 00:08:51.032 00:08:51.032 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.032 suites 1 1 n/a 0 0 00:08:51.032 tests 1 1 1 0 0 00:08:51.032 asserts 13 13 13 0 n/a 00:08:51.032 00:08:51.032 Elapsed time = 0.000 seconds 00:08:51.032 00:08:51.032 real 0m0.040s 00:08:51.032 user 0m0.020s 00:08:51.032 sys 0m0.021s 00:08:51.032 00:51:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:51.032 00:51:25 -- common/autotest_common.sh@10 -- # set +x 00:08:51.032 ************************************ 00:08:51.032 END TEST unittest_notify 00:08:51.032 ************************************ 00:08:51.032 00:51:25 -- unit/unittest.sh@222 -- # run_test unittest_nvme unittest_nvme 00:08:51.032 00:51:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:51.032 00:51:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.032 00:51:25 -- common/autotest_common.sh@10 -- # set +x 00:08:51.032 ************************************ 00:08:51.032 START TEST unittest_nvme 00:08:51.032 ************************************ 00:08:51.032 00:51:25 -- common/autotest_common.sh@1114 -- # unittest_nvme 00:08:51.032 00:51:25 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:08:51.032 00:08:51.032 00:08:51.032 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.032 http://cunit.sourceforge.net/ 00:08:51.032 00:08:51.032 00:08:51.032 Suite: nvme 00:08:51.032 Test: test_opc_data_transfer ...passed 00:08:51.032 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:08:51.032 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:08:51.032 Test: test_trid_parse_and_compare ...[2024-11-18 00:51:25.314895] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:08:51.032 [2024-11-18 00:51:25.315318] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:51.032 [2024-11-18 00:51:25.315443] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:08:51.032 [2024-11-18 00:51:25.315497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:51.032 [2024-11-18 00:51:25.315542] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:08:51.032 [2024-11-18 00:51:25.315661] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:51.032 passed 00:08:51.032 Test: test_trid_trtype_str ...passed 00:08:51.032 Test: test_trid_adrfam_str ...passed 00:08:51.032 Test: test_nvme_ctrlr_probe ...[2024-11-18 00:51:25.315928] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:51.032 passed 00:08:51.032 Test: test_spdk_nvme_probe ...[2024-11-18 00:51:25.316056] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:51.032 [2024-11-18 00:51:25.316101] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:51.032 [2024-11-18 00:51:25.316241] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:08:51.032 [2024-11-18 00:51:25.316296] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:51.032 passed 00:08:51.032 Test: test_spdk_nvme_connect ...[2024-11-18 00:51:25.316410] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:08:51.032 [2024-11-18 00:51:25.316827] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:51.032 passed 00:08:51.032 Test: test_nvme_ctrlr_probe_internal ...[2024-11-18 00:51:25.316907] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:08:51.032 [2024-11-18 00:51:25.317095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:51.032 [2024-11-18 00:51:25.317150] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:08:51.032 passed 00:08:51.032 Test: test_nvme_init_controllers ...[2024-11-18 00:51:25.317254] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:08:51.032 passed 00:08:51.032 Test: test_nvme_driver_init ...[2024-11-18 00:51:25.317385] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:08:51.032 [2024-11-18 00:51:25.317435] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:51.032 [2024-11-18 00:51:25.426419] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:08:51.032 passed 00:08:51.032 Test: test_spdk_nvme_detach ...passed 00:08:51.032 Test: test_nvme_completion_poll_cb ...passed 00:08:51.032 Test: test_nvme_user_copy_cmd_complete ...[2024-11-18 00:51:25.426686] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:08:51.032 passed 00:08:51.032 Test: test_nvme_allocate_request_null ...passed 
00:08:51.032 Test: test_nvme_allocate_request ...passed 00:08:51.032 Test: test_nvme_free_request ...passed 00:08:51.032 Test: test_nvme_allocate_request_user_copy ...passed 00:08:51.032 Test: test_nvme_robust_mutex_init_shared ...passed 00:08:51.032 Test: test_nvme_request_check_timeout ...passed 00:08:51.032 Test: test_nvme_wait_for_completion ...passed 00:08:51.032 Test: test_spdk_nvme_parse_func ...passed 00:08:51.032 Test: test_spdk_nvme_detach_async ...passed 00:08:51.032 Test: test_nvme_parse_addr ...[2024-11-18 00:51:25.427928] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:08:51.032 passed 00:08:51.032 00:08:51.032 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.032 suites 1 1 n/a 0 0 00:08:51.032 tests 25 25 25 0 0 00:08:51.032 asserts 326 326 326 0 n/a 00:08:51.032 00:08:51.032 Elapsed time = 0.007 seconds 00:08:51.292 00:51:25 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:08:51.292 00:08:51.292 00:08:51.292 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.292 http://cunit.sourceforge.net/ 00:08:51.292 00:08:51.292 00:08:51.292 Suite: nvme_ctrlr 00:08:51.292 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-11-18 00:51:25.475338] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.292 passed 00:08:51.292 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-11-18 00:51:25.477250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.292 passed 00:08:51.292 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-11-18 00:51:25.478529] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.292 passed 00:08:51.292 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-11-18 00:51:25.479761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.292 passed 00:08:51.293 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-11-18 00:51:25.481040] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.293 [2024-11-18 00:51:25.482153] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-18 00:51:25.483369] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-18 00:51:25.484524] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:51.293 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-11-18 00:51:25.486866] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.293 [2024-11-18 00:51:25.489112] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-18 00:51:25.490297] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:51.293 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-11-18 00:51:25.492688] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.293 [2024-11-18 00:51:25.493877] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-18 00:51:25.496191] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:51.293 Test: test_nvme_ctrlr_init_delay ...[2024-11-18 00:51:25.498597] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.293 passed 00:08:51.293 Test: test_alloc_io_qpair_rr_1 ...[2024-11-18 00:51:25.499951] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.293 [2024-11-18 00:51:25.500132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:51.293 [2024-11-18 00:51:25.500425] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:51.293 [2024-11-18 00:51:25.500545] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:51.293 [2024-11-18 00:51:25.500638] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:51.293 passed 00:08:51.293 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:08:51.293 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:08:51.293 Test: test_alloc_io_qpair_wrr_1 ...[2024-11-18 00:51:25.500855] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.293 passed 00:08:51.293 Test: test_alloc_io_qpair_wrr_2 ...[2024-11-18 00:51:25.501161] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.293 [2024-11-18 00:51:25.501367] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:51.293 passed 00:08:51.293 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-11-18 00:51:25.501779] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:08:51.293 [2024-11-18 00:51:25.502027] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:51.293 [2024-11-18 00:51:25.502245] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:08:51.293 passed 00:08:51.293 Test: test_nvme_ctrlr_fail ...[2024-11-18 00:51:25.502363] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:51.293 [2024-11-18 00:51:25.502497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:08:51.293 passed 00:08:51.293 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:08:51.293 Test: test_nvme_ctrlr_set_supported_features ...passed 00:08:51.293 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:08:51.293 Test: test_nvme_ctrlr_test_active_ns ...[2024-11-18 00:51:25.502965] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.552 passed 00:08:51.552 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:08:51.552 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:08:51.552 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:08:51.552 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-11-18 00:51:25.864333] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.552 passed 00:08:51.552 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-11-18 00:51:25.871402] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.552 passed 00:08:51.552 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-11-18 00:51:25.872640] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.552 [2024-11-18 00:51:25.872731] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:08:51.552 passed 00:08:51.552 Test: test_alloc_io_qpair_fail ...passed 00:08:51.552 Test: test_nvme_ctrlr_add_remove_process ...passed 00:08:51.552 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:08:51.552 Test: test_nvme_ctrlr_set_state ...passed 00:08:51.552 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-11-18 00:51:25.873877] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.552 [2024-11-18 00:51:25.874010] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:08:51.552 [2024-11-18 00:51:25.874178] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:08:51.552 [2024-11-18 00:51:25.874232] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.552 passed 00:08:51.553 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-11-18 00:51:25.900890] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.553 passed 00:08:51.553 Test: test_nvme_ctrlr_ns_mgmt ...[2024-11-18 00:51:25.941706] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.553 passed 00:08:51.553 Test: test_nvme_ctrlr_reset ...[2024-11-18 00:51:25.943378] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.553 passed 00:08:51.553 Test: test_nvme_ctrlr_aer_callback ...[2024-11-18 00:51:25.944017] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.553 passed 00:08:51.553 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-11-18 00:51:25.945542] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.553 passed 00:08:51.553 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:08:51.553 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:08:51.553 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-11-18 00:51:25.947647] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.553 passed 00:08:51.553 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:08:51.553 Test: test_nvme_ctrlr_ana_resize ...[2024-11-18 00:51:25.949227] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.553 passed 00:08:51.553 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:08:51.553 Test: test_nvme_transport_ctrlr_ready ...[2024-11-18 00:51:25.950914] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:08:51.553 [2024-11-18 00:51:25.951066] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:08:51.553 passed 00:08:51.553 Test: test_nvme_ctrlr_disable ...[2024-11-18 00:51:25.951200] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:51.553 passed 00:08:51.553 00:08:51.553 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.553 suites 1 1 n/a 0 0 00:08:51.553 tests 43 43 43 0 0 00:08:51.553 asserts 10418 10418 10418 0 n/a 00:08:51.553 00:08:51.553 Elapsed time = 0.435 seconds 00:08:51.812 00:51:25 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:08:51.812 00:08:51.812 00:08:51.812 CUnit - A unit testing framework for C - Version 2.1-3 
00:08:51.812 http://cunit.sourceforge.net/ 00:08:51.812 00:08:51.812 00:08:51.812 Suite: nvme_ctrlr_cmd 00:08:51.812 Test: test_get_log_pages ...passed 00:08:51.812 Test: test_set_feature_cmd ...passed 00:08:51.812 Test: test_set_feature_ns_cmd ...passed 00:08:51.812 Test: test_get_feature_cmd ...passed 00:08:51.812 Test: test_get_feature_ns_cmd ...passed 00:08:51.812 Test: test_abort_cmd ...passed 00:08:51.812 Test: test_set_host_id_cmds ...[2024-11-18 00:51:26.011539] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:08:51.812 passed 00:08:51.812 Test: test_io_cmd_raw_no_payload_build ...passed 00:08:51.812 Test: test_io_raw_cmd ...passed 00:08:51.812 Test: test_io_raw_cmd_with_md ...passed 00:08:51.812 Test: test_namespace_attach ...passed 00:08:51.812 Test: test_namespace_detach ...passed 00:08:51.812 Test: test_namespace_create ...passed 00:08:51.812 Test: test_namespace_delete ...passed 00:08:51.812 Test: test_doorbell_buffer_config ...passed 00:08:51.812 Test: test_format_nvme ...passed 00:08:51.812 Test: test_fw_commit ...passed 00:08:51.812 Test: test_fw_image_download ...passed 00:08:51.812 Test: test_sanitize ...passed 00:08:51.812 Test: test_directive ...passed 00:08:51.812 Test: test_nvme_request_add_abort ...passed 00:08:51.812 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:08:51.812 Test: test_nvme_ctrlr_cmd_identify ...passed 00:08:51.812 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:08:51.812 00:08:51.812 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.812 suites 1 1 n/a 0 0 00:08:51.812 tests 24 24 24 0 0 00:08:51.812 asserts 198 198 198 0 n/a 00:08:51.812 00:08:51.812 Elapsed time = 0.001 seconds 00:08:51.812 00:51:26 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:08:51.812 00:08:51.812 00:08:51.812 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.812 http://cunit.sourceforge.net/ 00:08:51.812 00:08:51.812 00:08:51.812 Suite: nvme_ctrlr_cmd 00:08:51.812 Test: test_geometry_cmd ...passed 00:08:51.812 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:08:51.812 00:08:51.812 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.812 suites 1 1 n/a 0 0 00:08:51.812 tests 2 2 2 0 0 00:08:51.812 asserts 7 7 7 0 n/a 00:08:51.812 00:08:51.812 Elapsed time = 0.000 seconds 00:08:51.812 00:51:26 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:08:51.812 00:08:51.812 00:08:51.812 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.812 http://cunit.sourceforge.net/ 00:08:51.812 00:08:51.812 00:08:51.813 Suite: nvme 00:08:51.813 Test: test_nvme_ns_construct ...passed 00:08:51.813 Test: test_nvme_ns_uuid ...passed 00:08:51.813 Test: test_nvme_ns_csi ...passed 00:08:51.813 Test: test_nvme_ns_data ...passed 00:08:51.813 Test: test_nvme_ns_set_identify_data ...passed 00:08:51.813 Test: test_spdk_nvme_ns_get_values ...passed 00:08:51.813 Test: test_spdk_nvme_ns_is_active ...passed 00:08:51.813 Test: spdk_nvme_ns_supports ...passed 00:08:51.813 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:08:51.813 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:08:51.813 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:08:51.813 Test: test_nvme_ns_find_id_desc ...passed 00:08:51.813 00:08:51.813 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.813 suites 1 1 n/a 0 0 00:08:51.813 tests 
12 12 12 0 0 00:08:51.813 asserts 83 83 83 0 n/a 00:08:51.813 00:08:51.813 Elapsed time = 0.001 seconds 00:08:51.813 00:51:26 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:08:51.813 00:08:51.813 00:08:51.813 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.813 http://cunit.sourceforge.net/ 00:08:51.813 00:08:51.813 00:08:51.813 Suite: nvme_ns_cmd 00:08:51.813 Test: split_test ...passed 00:08:51.813 Test: split_test2 ...passed 00:08:51.813 Test: split_test3 ...passed 00:08:51.813 Test: split_test4 ...passed 00:08:51.813 Test: test_nvme_ns_cmd_flush ...passed 00:08:51.813 Test: test_nvme_ns_cmd_dataset_management ...passed 00:08:51.813 Test: test_nvme_ns_cmd_copy ...passed 00:08:51.813 Test: test_io_flags ...[2024-11-18 00:51:26.126308] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:08:51.813 passed 00:08:51.813 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:08:51.813 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:08:51.813 Test: test_nvme_ns_cmd_reservation_register ...passed 00:08:51.813 Test: test_nvme_ns_cmd_reservation_release ...passed 00:08:51.813 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:08:51.813 Test: test_nvme_ns_cmd_reservation_report ...passed 00:08:51.813 Test: test_cmd_child_request ...passed 00:08:51.813 Test: test_nvme_ns_cmd_readv ...passed 00:08:51.813 Test: test_nvme_ns_cmd_read_with_md ...passed 00:08:51.813 Test: test_nvme_ns_cmd_writev ...[2024-11-18 00:51:26.128393] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:08:51.813 passed 00:08:51.813 Test: test_nvme_ns_cmd_write_with_md ...passed 00:08:51.813 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:08:51.813 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:08:51.813 Test: test_nvme_ns_cmd_comparev ...passed 00:08:51.813 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:08:51.813 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:08:51.813 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:08:51.813 Test: test_nvme_ns_cmd_setup_request ...passed 00:08:51.813 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:08:51.813 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-11-18 00:51:26.131073] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:51.813 passed 00:08:51.813 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-11-18 00:51:26.131340] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:51.813 passed 00:08:51.813 Test: test_nvme_ns_cmd_verify ...passed 00:08:51.813 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:08:51.813 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:08:51.813 00:08:51.813 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.813 suites 1 1 n/a 0 0 00:08:51.813 tests 32 32 32 0 0 00:08:51.813 asserts 550 550 550 0 n/a 00:08:51.813 00:08:51.813 Elapsed time = 0.004 seconds 00:08:51.813 00:51:26 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:08:51.813 00:08:51.813 00:08:51.813 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.813 http://cunit.sourceforge.net/ 00:08:51.813 00:08:51.813 00:08:51.813 Suite: nvme_ns_cmd 00:08:51.813 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
00:08:51.813 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:08:51.813 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:08:51.813 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:08:51.813 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:08:51.813 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:08:51.813 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:08:51.813 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:08:51.813 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:08:51.813 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:08:51.813 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:08:51.813 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:08:51.813 00:08:51.813 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.813 suites 1 1 n/a 0 0 00:08:51.813 tests 12 12 12 0 0 00:08:51.813 asserts 123 123 123 0 n/a 00:08:51.813 00:08:51.813 Elapsed time = 0.002 seconds 00:08:51.813 00:51:26 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:08:51.813 00:08:51.813 00:08:51.813 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.813 http://cunit.sourceforge.net/ 00:08:51.813 00:08:51.813 00:08:51.813 Suite: nvme_qpair 00:08:51.813 Test: test3 ...passed 00:08:51.813 Test: test_ctrlr_failed ...passed 00:08:51.813 Test: struct_packing ...passed 00:08:51.813 Test: test_nvme_qpair_process_completions ...[2024-11-18 00:51:26.210448] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:51.813 [2024-11-18 00:51:26.210873] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:51.813 [2024-11-18 00:51:26.210949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:51.813 passed 00:08:51.813 Test: test_nvme_completion_is_retry ...passed 00:08:51.813 Test: test_get_status_string ...passed 00:08:51.813 Test: test_nvme_qpair_add_cmd_error_injection ...[2024-11-18 00:51:26.211068] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:51.813 passed 00:08:51.813 Test: test_nvme_qpair_submit_request ...passed 00:08:51.813 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:08:51.813 Test: test_nvme_qpair_manual_complete_request ...passed 00:08:51.813 Test: test_nvme_qpair_init_deinit ...passed 00:08:51.813 Test: test_nvme_get_sgl_print_info ...passed 00:08:51.813 00:08:51.813 [2024-11-18 00:51:26.211603] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:51.813 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.813 suites 1 1 n/a 0 0 00:08:51.813 tests 12 12 12 0 0 00:08:51.813 asserts 154 154 154 0 n/a 00:08:51.813 00:08:51.813 Elapsed time = 0.002 seconds 00:08:52.073 00:51:26 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:08:52.073 00:08:52.073 00:08:52.073 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.073 http://cunit.sourceforge.net/ 00:08:52.073 00:08:52.073 00:08:52.073 Suite: nvme_pcie 00:08:52.073 Test: test_prp_list_append 
...[2024-11-18 00:51:26.255812] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:52.073 [2024-11-18 00:51:26.256107] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:08:52.073 [2024-11-18 00:51:26.256148] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:08:52.073 [2024-11-18 00:51:26.256341] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:52.073 passed 00:08:52.073 Test: test_nvme_pcie_hotplug_monitor ...passed 00:08:52.073 Test: test_shadow_doorbell_update ...passed 00:08:52.073 Test: test_build_contig_hw_sgl_request ...passed 00:08:52.073 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:08:52.073 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:08:52.073 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:08:52.073 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:08:52.073 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:08:52.073 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed[2024-11-18 00:51:26.256411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:52.073 [2024-11-18 00:51:26.256569] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:52.073 00:08:52.073 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:08:52.073 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:08:52.073 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:08:52.073 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:08:52.073 00:08:52.073 [2024-11-18 00:51:26.256638] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:08:52.073 [2024-11-18 00:51:26.256703] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:08:52.073 [2024-11-18 00:51:26.256752] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:08:52.073 [2024-11-18 00:51:26.256795] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:08:52.073 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.073 suites 1 1 n/a 0 0 00:08:52.073 tests 14 14 14 0 0 00:08:52.073 asserts 235 235 235 0 n/a 00:08:52.073 00:08:52.073 Elapsed time = 0.001 seconds 00:08:52.073 00:51:26 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:08:52.073 00:08:52.073 00:08:52.073 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.073 http://cunit.sourceforge.net/ 00:08:52.073 00:08:52.073 00:08:52.073 Suite: nvme_ns_cmd 00:08:52.073 Test: nvme_poll_group_create_test ...passed 00:08:52.073 Test: nvme_poll_group_add_remove_test ...passed 00:08:52.073 Test: nvme_poll_group_process_completions ...passed 00:08:52.073 Test: nvme_poll_group_destroy_test ...passed 00:08:52.073 Test: nvme_poll_group_get_free_stats ...passed 00:08:52.073 00:08:52.073 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.073 suites 1 1 n/a 0 0 00:08:52.073 tests 5 5 5 0 0 00:08:52.073 asserts 75 75 75 0 n/a 00:08:52.073 00:08:52.073 Elapsed time = 0.001 seconds 00:08:52.073 00:51:26 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:08:52.073 00:08:52.073 00:08:52.073 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.073 http://cunit.sourceforge.net/ 00:08:52.073 00:08:52.073 00:08:52.073 Suite: nvme_quirks 00:08:52.073 Test: test_nvme_quirks_striping ...passed 00:08:52.073 00:08:52.073 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.073 suites 1 1 n/a 0 0 00:08:52.073 tests 1 1 1 0 0 00:08:52.073 asserts 5 5 5 0 n/a 00:08:52.073 00:08:52.073 Elapsed time = 0.000 seconds 00:08:52.073 00:51:26 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:08:52.073 00:08:52.073 00:08:52.073 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.073 http://cunit.sourceforge.net/ 00:08:52.073 00:08:52.073 00:08:52.073 Suite: nvme_tcp 00:08:52.073 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:08:52.073 Test: test_nvme_tcp_build_iovs ...passed 00:08:52.073 Test: test_nvme_tcp_build_sgl_request ...[2024-11-18 00:51:26.375641] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffec8d96a40, and the iovcnt=16, remaining_size=28672 00:08:52.073 passed 00:08:52.073 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:08:52.073 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:08:52.073 Test: test_nvme_tcp_req_complete_safe ...passed 00:08:52.073 Test: test_nvme_tcp_req_get ...passed 00:08:52.073 Test: test_nvme_tcp_req_init ...passed 00:08:52.073 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:08:52.073 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:08:52.074 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:08:52.074 Test: test_nvme_tcp_alloc_reqs ...[2024-11-18 00:51:26.376451] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x7ffec8d98760 is same with the state(6) to be set 00:08:52.074 passed 00:08:52.074 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-11-18 00:51:26.376865] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec8d978f0 is same with the state(5) to be set 00:08:52.074 passed 00:08:52.074 Test: test_nvme_tcp_pdu_ch_handle ...[2024-11-18 00:51:26.376944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffec8d98420 00:08:52.074 [2024-11-18 00:51:26.377018] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:08:52.074 [2024-11-18 00:51:26.377135] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec8d97db0 is same with the state(5) to be set 00:08:52.074 [2024-11-18 00:51:26.377208] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:08:52.074 [2024-11-18 00:51:26.377311] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec8d97db0 is same with the state(5) to be set 00:08:52.074 [2024-11-18 00:51:26.377372] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:08:52.074 [2024-11-18 00:51:26.377417] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec8d97db0 is same with the state(5) to be set 00:08:52.074 [2024-11-18 00:51:26.377475] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec8d97db0 is same with the state(5) to be set 00:08:52.074 [2024-11-18 00:51:26.377537] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec8d97db0 is same with the state(5) to be set 00:08:52.074 [2024-11-18 00:51:26.377605] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec8d97db0 is same with the state(5) to be set 00:08:52.074 passed 00:08:52.074 Test: test_nvme_tcp_qpair_connect_sock ...[2024-11-18 00:51:26.377653] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec8d97db0 is same with the state(5) to be set 00:08:52.074 [2024-11-18 00:51:26.377711] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec8d97db0 is same with the state(5) to be set 00:08:52.074 [2024-11-18 00:51:26.377903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:08:52.074 [2024-11-18 00:51:26.377968] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:52.074 passed 00:08:52.074 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:08:52.074 Test: test_nvme_tcp_c2h_payload_handle ...[2024-11-18 00:51:26.378416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:52.074 [2024-11-18 00:51:26.378581] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffec8d97f60): PDU Sequence Error 00:08:52.074 passed 00:08:52.074 Test: test_nvme_tcp_icresp_handle ...[2024-11-18 00:51:26.378727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:08:52.074 [2024-11-18 00:51:26.378781] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:08:52.074 [2024-11-18 00:51:26.378835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec8d97900 is same with the state(5) to be set 00:08:52.074 [2024-11-18 00:51:26.378906] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:08:52.074 [2024-11-18 00:51:26.378964] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec8d97900 is same with the state(5) to be set 00:08:52.074 passed 00:08:52.074 Test: test_nvme_tcp_pdu_payload_handle ...[2024-11-18 00:51:26.379035] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec8d97900 is same with the state(0) to be set 00:08:52.074 [2024-11-18 00:51:26.379136] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffec8d98420): PDU Sequence Error 00:08:52.074 passed 00:08:52.074 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:08:52.074 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-11-18 00:51:26.379246] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffec8d96be0 00:08:52.074 passed 00:08:52.074 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-11-18 00:51:26.379458] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffec8d96260, errno=0, rc=0 00:08:52.074 [2024-11-18 00:51:26.379530] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec8d96260 is same with the state(5) to be set 00:08:52.074 [2024-11-18 00:51:26.379620] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec8d96260 is same with the state(5) to be set 00:08:52.074 [2024-11-18 00:51:26.379691] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffec8d96260 (0): Success 00:08:52.074 [2024-11-18 00:51:26.379741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffec8d96260 (0): Success 00:08:52.074 passed 00:08:52.334 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-11-18 00:51:26.543994] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:52.334 [2024-11-18 00:51:26.544138] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:08:52.334 passed 00:08:52.334 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:08:52.334 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:08:52.334 Test: test_nvme_tcp_ctrlr_construct ...[2024-11-18 00:51:26.544433] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:52.334 [2024-11-18 00:51:26.544492] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:52.334 [2024-11-18 00:51:26.544741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:52.334 [2024-11-18 00:51:26.544796] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:52.334 [2024-11-18 00:51:26.544951] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:08:52.334 [2024-11-18 00:51:26.545037] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:52.334 [2024-11-18 00:51:26.545175] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:08:52.334 [2024-11-18 00:51:26.545255] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:52.334 passed 00:08:52.334 Test: test_nvme_tcp_qpair_submit_request ...passed[2024-11-18 00:51:26.545454] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:08:52.334 [2024-11-18 00:51:26.545513] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:08:52.334 00:08:52.334 00:08:52.334 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.334 suites 1 1 n/a 0 0 00:08:52.334 tests 27 27 27 0 0 00:08:52.334 asserts 624 624 624 0 n/a 00:08:52.334 00:08:52.334 Elapsed time = 0.170 seconds 00:08:52.334 00:51:26 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:08:52.334 00:08:52.334 00:08:52.334 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.334 http://cunit.sourceforge.net/ 00:08:52.334 00:08:52.334 00:08:52.334 Suite: nvme_transport 00:08:52.334 Test: test_nvme_get_transport ...passed 00:08:52.334 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:08:52.334 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:08:52.334 Test: test_nvme_transport_poll_group_add_remove ...passed 00:08:52.334 Test: test_ctrlr_get_memory_domains ...passed 00:08:52.334 00:08:52.334 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.334 suites 1 1 n/a 0 0 00:08:52.334 tests 5 5 5 0 0 00:08:52.334 asserts 28 28 28 0 n/a 00:08:52.334 00:08:52.334 Elapsed time = 0.000 seconds 00:08:52.334 00:51:26 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:08:52.334 00:08:52.334 00:08:52.334 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.334 http://cunit.sourceforge.net/ 00:08:52.334 00:08:52.334 00:08:52.334 Suite: nvme_io_msg 00:08:52.334 Test: test_nvme_io_msg_send ...passed 00:08:52.334 Test: 
test_nvme_io_msg_process ...passed 00:08:52.334 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:08:52.334 00:08:52.334 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.334 suites 1 1 n/a 0 0 00:08:52.334 tests 3 3 3 0 0 00:08:52.334 asserts 56 56 56 0 n/a 00:08:52.334 00:08:52.334 Elapsed time = 0.000 seconds 00:08:52.334 00:51:26 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:08:52.334 00:08:52.334 00:08:52.334 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.334 http://cunit.sourceforge.net/ 00:08:52.334 00:08:52.334 00:08:52.334 Suite: nvme_pcie_common 00:08:52.334 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:08:52.334 Test: test_nvme_pcie_qpair_construct_destroy ...[2024-11-18 00:51:26.681300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:08:52.334 passed 00:08:52.334 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:08:52.334 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-11-18 00:51:26.682034] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:08:52.334 passed 00:08:52.334 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-11-18 00:51:26.682171] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:08:52.334 [2024-11-18 00:51:26.682210] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:08:52.334 passed 00:08:52.334 Test: test_nvme_pcie_poll_group_get_stats ...[2024-11-18 00:51:26.682626] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:52.334 [2024-11-18 00:51:26.682671] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:52.334 passed 00:08:52.334 00:08:52.334 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.334 suites 1 1 n/a 0 0 00:08:52.334 tests 6 6 6 0 0 00:08:52.334 asserts 148 148 148 0 n/a 00:08:52.334 00:08:52.334 Elapsed time = 0.001 seconds 00:08:52.334 00:51:26 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:08:52.334 00:08:52.334 00:08:52.334 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.334 http://cunit.sourceforge.net/ 00:08:52.334 00:08:52.334 00:08:52.334 Suite: nvme_fabric 00:08:52.334 Test: test_nvme_fabric_prop_set_cmd ...passed 00:08:52.334 Test: test_nvme_fabric_prop_get_cmd ...passed 00:08:52.334 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:08:52.334 Test: test_nvme_fabric_discover_probe ...passed 00:08:52.334 Test: test_nvme_fabric_qpair_connect ...[2024-11-18 00:51:26.717919] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:08:52.334 passed 00:08:52.334 00:08:52.334 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.334 suites 1 1 n/a 0 0 00:08:52.334 tests 5 5 5 0 0 00:08:52.334 asserts 60 60 60 0 n/a 00:08:52.334 00:08:52.334 Elapsed time = 0.001 seconds 00:08:52.594 00:51:26 -- unit/unittest.sh@102 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:08:52.594 00:08:52.594 00:08:52.594 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.594 http://cunit.sourceforge.net/ 00:08:52.594 00:08:52.594 00:08:52.594 Suite: nvme_opal 00:08:52.594 Test: test_opal_nvme_security_recv_send_done ...passed 00:08:52.594 Test: test_opal_add_short_atom_header ...passed 00:08:52.594 00:08:52.594 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.594 suites 1 1 n/a 0 0 00:08:52.594 tests 2 2 2 0 0 00:08:52.594 asserts 22 22 22 0 n/a 00:08:52.594 00:08:52.594 Elapsed time = 0.000 seconds 00:08:52.594 [2024-11-18 00:51:26.757322] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:08:52.594 00:08:52.594 real 0m1.483s 00:08:52.594 user 0m0.731s 00:08:52.594 sys 0m0.602s 00:08:52.594 ************************************ 00:08:52.594 END TEST unittest_nvme 00:08:52.594 ************************************ 00:08:52.594 00:51:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:52.594 00:51:26 -- common/autotest_common.sh@10 -- # set +x 00:08:52.594 00:51:26 -- unit/unittest.sh@223 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:52.594 00:51:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:52.594 00:51:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:52.594 00:51:26 -- common/autotest_common.sh@10 -- # set +x 00:08:52.594 ************************************ 00:08:52.594 START TEST unittest_log 00:08:52.594 ************************************ 00:08:52.594 00:51:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:52.594 00:08:52.594 00:08:52.594 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.594 http://cunit.sourceforge.net/ 00:08:52.594 00:08:52.594 00:08:52.594 Suite: log 00:08:52.594 Test: log_test ...passed 00:08:52.594 Test: deprecation ...[2024-11-18 00:51:26.863673] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:08:52.594 [2024-11-18 00:51:26.864005] log_ut.c: 55:log_test: *DEBUG*: log test 00:08:52.594 log dump test: 00:08:52.594 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:08:52.594 spdk dump test: 00:08:52.594 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:08:52.594 spdk dump test: 00:08:52.594 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:08:52.594 00000010 65 20 63 68 61 72 73 e chars 00:08:53.531 passed 00:08:53.531 00:08:53.531 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.531 suites 1 1 n/a 0 0 00:08:53.531 tests 2 2 2 0 0 00:08:53.531 asserts 73 73 73 0 n/a 00:08:53.531 00:08:53.531 Elapsed time = 0.001 seconds 00:08:53.531 00:08:53.531 real 0m1.043s 00:08:53.531 user 0m0.016s 00:08:53.531 sys 0m0.028s 00:08:53.531 00:51:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.531 00:51:27 -- common/autotest_common.sh@10 -- # set +x 00:08:53.531 ************************************ 00:08:53.531 END TEST unittest_log 00:08:53.531 ************************************ 00:08:53.792 00:51:27 -- unit/unittest.sh@224 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:53.792 00:51:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:53.792 00:51:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.792 00:51:27 -- common/autotest_common.sh@10 -- # set +x 00:08:53.792 
************************************ 00:08:53.792 START TEST unittest_lvol 00:08:53.792 ************************************ 00:08:53.792 00:51:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:53.792 00:08:53.792 00:08:53.792 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.792 http://cunit.sourceforge.net/ 00:08:53.792 00:08:53.792 00:08:53.792 Suite: lvol 00:08:53.792 Test: lvs_init_unload_success ...[2024-11-18 00:51:27.983941] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:08:53.792 passed 00:08:53.792 Test: lvs_init_destroy_success ...[2024-11-18 00:51:27.984627] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:08:53.792 passed 00:08:53.793 Test: lvs_init_opts_success ...passed 00:08:53.793 Test: lvs_unload_lvs_is_null_fail ...passed 00:08:53.793 Test: lvs_names ...[2024-11-18 00:51:27.984930] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:08:53.793 [2024-11-18 00:51:27.985014] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:08:53.793 [2024-11-18 00:51:27.985093] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:08:53.793 [2024-11-18 00:51:27.985323] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:08:53.793 passed 00:08:53.793 Test: lvol_create_destroy_success ...passed 00:08:53.793 Test: lvol_create_fail ...[2024-11-18 00:51:27.986067] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:08:53.793 [2024-11-18 00:51:27.986276] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:08:53.793 passed 00:08:53.793 Test: lvol_destroy_fail ...[2024-11-18 00:51:27.986667] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:08:53.793 passed 00:08:53.793 Test: lvol_close ...passed 00:08:53.793 Test: lvol_resize ...[2024-11-18 00:51:27.986937] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:08:53.793 [2024-11-18 00:51:27.987006] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:08:53.793 passed 00:08:53.793 Test: lvol_set_read_only ...passed 00:08:53.793 Test: test_lvs_load ...[2024-11-18 00:51:27.987986] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:08:53.793 passed 00:08:53.793 Test: lvols_load ...[2024-11-18 00:51:27.988034] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:08:53.793 [2024-11-18 00:51:27.988342] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:53.793 passed 00:08:53.793 Test: lvol_open ...[2024-11-18 00:51:27.988552] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:53.793 passed 00:08:53.793 Test: lvol_snapshot ...passed 00:08:53.793 Test: lvol_snapshot_fail ...passed 00:08:53.793 Test: lvol_clone ...[2024-11-18 00:51:27.989420] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already 
exists 00:08:53.793 passed 00:08:53.793 Test: lvol_clone_fail ...passed 00:08:53.793 Test: lvol_iter_clones ...[2024-11-18 00:51:27.990090] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:08:53.793 passed 00:08:53.793 Test: lvol_refcnt ...[2024-11-18 00:51:27.990730] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 2f3af5dd-ed15-4209-886c-4d42ba7258df because it is still open 00:08:53.793 passed 00:08:53.793 Test: lvol_names ...[2024-11-18 00:51:27.990997] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:08:53.793 [2024-11-18 00:51:27.991102] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:53.793 passed 00:08:53.793 Test: lvol_create_thin_provisioned ...[2024-11-18 00:51:27.991416] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:08:53.793 passed 00:08:53.793 Test: lvol_rename ...[2024-11-18 00:51:27.991933] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:53.793 [2024-11-18 00:51:27.992059] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:08:53.793 passed 00:08:53.793 Test: lvs_rename ...passed 00:08:53.793 Test: lvol_inflate ...[2024-11-18 00:51:27.992343] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:08:53.793 passed 00:08:53.793 Test: lvol_decouple_parent ...[2024-11-18 00:51:27.992609] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:53.793 [2024-11-18 00:51:27.992890] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:53.793 passed 00:08:53.793 Test: lvol_get_xattr ...passed 00:08:53.793 Test: lvol_esnap_reload ...passed 00:08:53.793 Test: lvol_esnap_create_bad_args ...[2024-11-18 00:51:27.993477] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:08:53.793 [2024-11-18 00:51:27.993525] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:08:53.793 [2024-11-18 00:51:27.993589] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:08:53.793 [2024-11-18 00:51:27.993734] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:53.793 [2024-11-18 00:51:27.993888] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:08:53.793 passed 00:08:53.793 Test: lvol_esnap_create_delete ...passed 00:08:53.793 Test: lvol_esnap_load_esnaps ...passed 00:08:53.793 Test: lvol_esnap_missing ...[2024-11-18 00:51:27.994268] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:08:53.793 [2024-11-18 00:51:27.994460] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:53.793 [2024-11-18 00:51:27.994514] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:53.793 passed 00:08:53.793 Test: lvol_esnap_hotplug ... 00:08:53.793 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:08:53.793 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:08:53.793 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:08:53.793 [2024-11-18 00:51:27.995284] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 5e0bbeef-41a7-487f-9710-7b07d73dc895: failed to create esnap bs_dev: error -12 00:08:53.793 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:08:53.793 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:08:53.793 [2024-11-18 00:51:27.995506] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 819a0713-7077-4d67-bda6-331cd9ae13ca: failed to create esnap bs_dev: error -12 00:08:53.793 [2024-11-18 00:51:27.995620] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 70c81e90-2316-49cb-9f06-4275b5be4f7c: failed to create esnap bs_dev: error -12 00:08:53.793 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:08:53.793 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:08:53.793 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:08:53.793 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:08:53.793 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:08:53.793 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:08:53.793 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:08:53.793 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:08:53.793 passed 00:08:53.793 Test: lvol_get_by ...passed 00:08:53.793 00:08:53.793 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.793 suites 1 1 n/a 0 0 00:08:53.793 tests 34 34 34 0 0 00:08:53.793 asserts 1439 1439 1439 0 n/a 00:08:53.793 00:08:53.793 Elapsed time = 0.013 seconds 00:08:53.793 00:08:53.793 real 0m0.063s 00:08:53.793 user 0m0.021s 00:08:53.793 sys 0m0.042s 00:08:53.793 00:51:28 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.793 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:08:53.793 ************************************ 00:08:53.793 END TEST unittest_lvol 00:08:53.793 ************************************ 00:08:53.793 00:51:28 -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:53.793 00:51:28 -- unit/unittest.sh@226 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:53.793 00:51:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:53.793 00:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.793 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:08:53.793 ************************************ 00:08:53.793 START TEST unittest_nvme_rdma 00:08:53.793 ************************************ 00:08:53.793 00:51:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:53.793 00:08:53.793 00:08:53.793 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.793 http://cunit.sourceforge.net/ 00:08:53.793 00:08:53.793 00:08:53.793 Suite: nvme_rdma 00:08:53.793 Test: test_nvme_rdma_build_sgl_request ...[2024-11-18 00:51:28.104001] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:08:53.793 passed 00:08:53.793 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:08:53.793 Test: test_nvme_rdma_build_contig_request ...[2024-11-18 00:51:28.104434] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:53.793 [2024-11-18 00:51:28.104558] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:08:53.793 [2024-11-18 00:51:28.104649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:53.793 passed 00:08:53.793 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:08:53.793 Test: test_nvme_rdma_create_reqs ...[2024-11-18 00:51:28.104804] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:08:53.793 passed 00:08:53.794 Test: test_nvme_rdma_create_rsps ...[2024-11-18 00:51:28.105240] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:08:53.794 passed 00:08:53.794 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:08:53.794 Test: test_nvme_rdma_poller_create ...[2024-11-18 00:51:28.105481] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:53.794 [2024-11-18 00:51:28.105554] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:08:53.794 passed 00:08:53.794 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:08:53.794 Test: test_nvme_rdma_ctrlr_construct ...[2024-11-18 00:51:28.105777] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:08:53.794 passed 00:08:53.794 Test: test_nvme_rdma_req_put_and_get ...passed 00:08:53.794 Test: test_nvme_rdma_req_init ...passed 00:08:53.794 Test: test_nvme_rdma_validate_cm_event ...passed 00:08:53.794 Test: test_nvme_rdma_qpair_init ...passed 00:08:53.794 Test: test_nvme_rdma_qpair_submit_request ...[2024-11-18 00:51:28.106113] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:08:53.794 [2024-11-18 00:51:28.106192] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:08:53.794 passed 00:08:53.794 Test: test_nvme_rdma_memory_domain ...passed 00:08:53.794 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:08:53.794 Test: test_rdma_get_memory_translation ...passed 00:08:53.794 Test: test_get_rdma_qpair_from_wc ...passed 00:08:53.794 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:08:53.794 Test: test_nvme_rdma_poll_group_get_stats ...[2024-11-18 00:51:28.106417] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:08:53.794 [2024-11-18 00:51:28.106531] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:08:53.794 [2024-11-18 00:51:28.106602] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:08:53.794 passed 00:08:53.794 Test: test_nvme_rdma_qpair_set_poller ...[2024-11-18 00:51:28.106732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:53.794 [2024-11-18 00:51:28.106790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:53.794 [2024-11-18 00:51:28.106924] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:08:53.794 [2024-11-18 00:51:28.106988] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:08:53.794 [2024-11-18 00:51:28.107038] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffdb7310be0 on poll group 0x60b0000001a0 00:08:53.794 [2024-11-18 00:51:28.107129] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:08:53.794 [2024-11-18 00:51:28.107190] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:08:53.794 [2024-11-18 00:51:28.107239] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffdb7310be0 on poll group 0x60b0000001a0 00:08:53.794 passed 00:08:53.794 00:08:53.794 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.794 suites 1 1 n/a 0 0 00:08:53.794 tests 22 22 22 0 0 00:08:53.794 asserts 412 412 412 0 n/a 00:08:53.794 00:08:53.794 Elapsed time = 0.004 seconds 00:08:53.794 [2024-11-18 00:51:28.107366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:53.794 00:08:53.794 real 0m0.049s 00:08:53.794 user 0m0.020s 00:08:53.794 sys 0m0.029s 00:08:53.794 00:51:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.794 ************************************ 00:08:53.794 END TEST unittest_nvme_rdma 00:08:53.794 ************************************ 00:08:53.794 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:08:53.794 00:51:28 -- unit/unittest.sh@227 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:53.794 00:51:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:53.794 00:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.794 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:08:54.054 ************************************ 00:08:54.054 START TEST unittest_nvmf_transport 00:08:54.054 ************************************ 00:08:54.054 00:51:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:54.054 00:08:54.054 00:08:54.054 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.054 http://cunit.sourceforge.net/ 00:08:54.054 00:08:54.054 00:08:54.054 Suite: nvmf 00:08:54.054 Test: test_spdk_nvmf_transport_create ...[2024-11-18 00:51:28.225814] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:08:54.054 [2024-11-18 00:51:28.226276] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:08:54.054 [2024-11-18 00:51:28.226358] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:08:54.054 passed 00:08:54.054 Test: test_nvmf_transport_poll_group_create ...passed 00:08:54.054 Test: test_spdk_nvmf_transport_opts_init ...[2024-11-18 00:51:28.226532] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:08:54.054 [2024-11-18 00:51:28.226868] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:08:54.054 passed 00:08:54.054 Test: test_spdk_nvmf_transport_listen_ext ...passed[2024-11-18 00:51:28.227006] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:08:54.054 [2024-11-18 00:51:28.227051] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:08:54.054 00:08:54.054 00:08:54.054 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.054 suites 1 1 n/a 0 0 00:08:54.054 tests 4 4 4 0 0 00:08:54.054 asserts 49 49 49 0 n/a 00:08:54.054 00:08:54.054 Elapsed time = 0.002 seconds 00:08:54.054 00:08:54.054 real 0m0.052s 00:08:54.054 user 0m0.025s 00:08:54.054 sys 0m0.027s 00:08:54.054 00:51:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:54.054 ************************************ 00:08:54.054 END TEST unittest_nvmf_transport 00:08:54.054 ************************************ 00:08:54.054 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:08:54.054 00:51:28 -- unit/unittest.sh@228 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:54.054 00:51:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:54.054 00:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:54.054 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:08:54.054 ************************************ 00:08:54.054 START TEST unittest_rdma 00:08:54.054 ************************************ 00:08:54.054 00:51:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:54.054 00:08:54.054 00:08:54.054 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.054 http://cunit.sourceforge.net/ 00:08:54.054 00:08:54.054 00:08:54.054 Suite: rdma_common 00:08:54.054 Test: test_spdk_rdma_pd ...[2024-11-18 00:51:28.338678] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:54.054 [2024-11-18 00:51:28.339193] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:54.054 passed 00:08:54.054 00:08:54.054 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.054 suites 1 1 n/a 0 0 00:08:54.054 tests 1 1 1 0 0 00:08:54.054 asserts 31 31 31 0 n/a 00:08:54.054 00:08:54.054 Elapsed time = 0.001 seconds 00:08:54.054 00:08:54.054 real 0m0.039s 00:08:54.054 user 0m0.020s 00:08:54.054 sys 0m0.019s 00:08:54.054 00:51:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:54.054 ************************************ 00:08:54.054 END TEST unittest_rdma 00:08:54.054 ************************************ 00:08:54.054 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:08:54.054 00:51:28 -- unit/unittest.sh@231 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:54.054 00:51:28 -- unit/unittest.sh@232 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:54.054 00:51:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:54.054 00:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:54.054 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:08:54.054 ************************************ 00:08:54.054 START TEST unittest_nvme_cuse 00:08:54.054 ************************************ 00:08:54.054 00:51:28 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:54.054 00:08:54.054 00:08:54.054 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.054 http://cunit.sourceforge.net/ 00:08:54.054 00:08:54.054 00:08:54.054 Suite: nvme_cuse 00:08:54.054 Test: test_cuse_nvme_submit_io_read_write ...passed 00:08:54.054 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:08:54.054 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:08:54.054 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:08:54.054 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:08:54.054 Test: test_cuse_nvme_submit_io ...[2024-11-18 00:51:28.448743] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:08:54.054 passed 00:08:54.054 Test: test_cuse_nvme_reset ...[2024-11-18 00:51:28.449139] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:08:54.054 passed 00:08:54.054 Test: test_nvme_cuse_stop ...passed 00:08:54.054 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:08:54.054 00:08:54.054 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.054 suites 1 1 n/a 0 0 00:08:54.054 tests 9 9 9 0 0 00:08:54.054 asserts 121 121 121 0 n/a 00:08:54.054 00:08:54.054 Elapsed time = 0.002 seconds 00:08:54.315 00:08:54.315 real 0m0.045s 00:08:54.315 user 0m0.037s 00:08:54.315 sys 0m0.009s 00:08:54.315 00:51:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:54.315 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:08:54.315 ************************************ 00:08:54.315 END TEST unittest_nvme_cuse 00:08:54.315 ************************************ 00:08:54.315 00:51:28 -- unit/unittest.sh@235 -- # run_test unittest_nvmf unittest_nvmf 00:08:54.315 00:51:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:54.315 00:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:54.315 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:08:54.315 ************************************ 00:08:54.315 START TEST unittest_nvmf 00:08:54.315 ************************************ 00:08:54.315 00:51:28 -- common/autotest_common.sh@1114 -- # unittest_nvmf 00:08:54.315 00:51:28 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:08:54.315 00:08:54.315 00:08:54.315 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.315 http://cunit.sourceforge.net/ 00:08:54.315 00:08:54.315 00:08:54.315 Suite: nvmf 00:08:54.315 Test: test_get_log_page ...passed 00:08:54.315 Test: test_process_fabrics_cmd ...passed 00:08:54.315 Test: test_connect ...[2024-11-18 00:51:28.568548] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:08:54.315 [2024-11-18 00:51:28.569580] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:08:54.315 [2024-11-18 00:51:28.569707] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:08:54.315 [2024-11-18 00:51:28.569786] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:08:54.315 [2024-11-18 00:51:28.569842] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
00:08:54.315 [2024-11-18 00:51:28.569965] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:08:54.315 [2024-11-18 00:51:28.570013] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:08:54.315 [2024-11-18 00:51:28.570173] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:08:54.315 [2024-11-18 00:51:28.570224] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:08:54.315 [2024-11-18 00:51:28.570345] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:08:54.315 [2024-11-18 00:51:28.570435] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:08:54.315 [2024-11-18 00:51:28.570768] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:08:54.315 [2024-11-18 00:51:28.570855] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:08:54.315 [2024-11-18 00:51:28.570978] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:08:54.315 [2024-11-18 00:51:28.571061] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:08:54.315 [2024-11-18 00:51:28.571186] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:08:54.315 passed 00:08:54.315 Test: test_get_ns_id_desc_list ...passed 00:08:54.315 Test: test_identify_ns ...[2024-11-18 00:51:28.571338] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:08:54.315 [2024-11-18 00:51:28.571588] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:54.315 [2024-11-18 00:51:28.571810] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:08:54.315 [2024-11-18 00:51:28.571960] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:08:54.315 passed 00:08:54.315 Test: test_identify_ns_iocs_specific ...[2024-11-18 00:51:28.572101] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:54.315 passed 00:08:54.315 Test: test_reservation_write_exclusive ...passed 00:08:54.315 Test: test_reservation_exclusive_access ...passed 00:08:54.315 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...[2024-11-18 00:51:28.572417] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:54.315 passed 00:08:54.315 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:08:54.315 Test: test_reservation_notification_log_page ...passed 00:08:54.315 Test: test_get_dif_ctx ...passed 00:08:54.315 Test: test_set_get_features ...[2024-11-18 00:51:28.573040] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:54.315 [2024-11-18 00:51:28.573093] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:54.315 passed 00:08:54.315 Test: test_identify_ctrlr ...passed 00:08:54.315 Test: test_identify_ctrlr_iocs_specific ...[2024-11-18 00:51:28.573149] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:08:54.315 [2024-11-18 00:51:28.573220] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:08:54.315 passed 00:08:54.315 Test: test_custom_admin_cmd ...passed 00:08:54.315 Test: test_fused_compare_and_write ...[2024-11-18 00:51:28.573710] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:08:54.315 [2024-11-18 00:51:28.573757] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:54.315 [2024-11-18 00:51:28.573815] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:54.315 passed 00:08:54.315 Test: test_multi_async_event_reqs ...passed 00:08:54.315 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:08:54.315 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:08:54.315 Test: test_multi_async_events ...passed 00:08:54.315 Test: test_rae ...passed 00:08:54.315 Test: test_nvmf_ctrlr_create_destruct ...passed 00:08:54.315 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:08:54.315 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:08:54.315 Test: test_zcopy_read ...passed 00:08:54.315 Test: test_zcopy_write ...passed 00:08:54.315 Test: test_nvmf_property_set ...[2024-11-18 00:51:28.574508] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:08:54.315 passed 00:08:54.315 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:08:54.315 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:08:54.315 00:08:54.315 [2024-11-18 00:51:28.574708] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:54.315 [2024-11-18 00:51:28.574802] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:54.315 [2024-11-18 00:51:28.574861] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:08:54.315 [2024-11-18 00:51:28.574923] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:08:54.316 [2024-11-18 00:51:28.574969] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:54.316 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.316 suites 1 1 n/a 0 0 00:08:54.316 tests 30 30 30 0 0 00:08:54.316 asserts 885 885 885 0 n/a 00:08:54.316 00:08:54.316 Elapsed time = 0.007 seconds 00:08:54.316 00:51:28 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:08:54.316 00:08:54.316 00:08:54.316 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.316 http://cunit.sourceforge.net/ 00:08:54.316 00:08:54.316 00:08:54.316 Suite: nvmf 00:08:54.316 Test: test_get_rw_params ...passed 00:08:54.316 Test: test_lba_in_range ...passed 00:08:54.316 Test: test_get_dif_ctx ...passed 00:08:54.316 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:08:54.316 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed 00:08:54.316 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-11-18 00:51:28.625699] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:08:54.316 [2024-11-18 00:51:28.626064] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:08:54.316 [2024-11-18 00:51:28.626202] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:08:54.316 [2024-11-18 00:51:28.626267] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:08:54.316 [2024-11-18 00:51:28.626378] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:08:54.316 passed 00:08:54.316 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:08:54.316 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:08:54.316 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:08:54.316 00:08:54.316 [2024-11-18 00:51:28.626504] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:08:54.316 [2024-11-18 00:51:28.626554] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:08:54.316 [2024-11-18 00:51:28.626648] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:08:54.316 [2024-11-18 00:51:28.626693] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:08:54.316 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.316 suites 1 1 n/a 0 0 00:08:54.316 tests 9 9 9 0 0 00:08:54.316 asserts 157 157 157 0 n/a 00:08:54.316 00:08:54.316 Elapsed time = 0.001 seconds 00:08:54.316 00:51:28 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:08:54.316 00:08:54.316 00:08:54.316 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.316 http://cunit.sourceforge.net/ 00:08:54.316 00:08:54.316 00:08:54.316 Suite: nvmf 00:08:54.316 Test: test_discovery_log ...passed 00:08:54.316 Test: test_discovery_log_with_filters ...passed 00:08:54.316 00:08:54.316 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.316 suites 1 1 n/a 0 0 00:08:54.316 tests 2 2 2 0 0 00:08:54.316 asserts 238 238 238 0 n/a 00:08:54.316 00:08:54.316 Elapsed time = 0.004 seconds 00:08:54.316 00:51:28 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:08:54.577 00:08:54.577 00:08:54.577 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.577 http://cunit.sourceforge.net/ 00:08:54.577 00:08:54.577 00:08:54.577 Suite: nvmf 
00:08:54.577 Test: nvmf_test_create_subsystem ...[2024-11-18 00:51:28.722784] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:08:54.577 [2024-11-18 00:51:28.723273] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:08:54.577 [2024-11-18 00:51:28.723415] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:08:54.577 [2024-11-18 00:51:28.723471] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:08:54.577 [2024-11-18 00:51:28.723518] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:08:54.577 [2024-11-18 00:51:28.723578] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:08:54.577 [2024-11-18 00:51:28.723725] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:08:54.577 [2024-11-18 00:51:28.723949] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:08:54.577 passed 00:08:54.577 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-11-18 00:51:28.724083] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:08:54.577 [2024-11-18 00:51:28.724129] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:54.577 [2024-11-18 00:51:28.724177] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:54.577 passed 00:08:54.577 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:08:54.577 Test: test_reservation_register ...[2024-11-18 00:51:28.724425] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:08:54.577 [2024-11-18 00:51:28.724559] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:08:54.577 passed 00:08:54.577 Test: test_reservation_register_with_ptpl ...[2024-11-18 00:51:28.724921] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:54.577 [2024-11-18 00:51:28.725139] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:08:54.577 passed 00:08:54.577 Test: test_reservation_acquire_preempt_1 ...passed 00:08:54.577 Test: test_reservation_acquire_release_with_ptpl ...[2024-11-18 00:51:28.726429] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:54.577 passed 00:08:54.577 Test: test_reservation_release ...passed 00:08:54.577 Test: test_reservation_unregister_notification ...[2024-11-18 00:51:28.728350] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:54.577 [2024-11-18 00:51:28.728628] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:54.577 passed 00:08:54.577 Test: test_reservation_release_notification ...passed 00:08:54.577 Test: test_reservation_release_notification_write_exclusive ...[2024-11-18 00:51:28.728925] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:54.577 [2024-11-18 00:51:28.729209] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:54.577 passed 00:08:54.577 Test: test_reservation_clear_notification ...passed 00:08:54.577 Test: test_reservation_preempt_notification ...[2024-11-18 00:51:28.729478] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:54.577 passed 00:08:54.577 Test: test_spdk_nvmf_ns_event ...[2024-11-18 00:51:28.729787] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:54.577 passed 00:08:54.577 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:08:54.577 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:08:54.577 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:08:54.577 Test: test_nvmf_ns_reservation_report ...[2024-11-18 00:51:28.730815] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:08:54.577 [2024-11-18 00:51:28.730944] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:08:54.577 [2024-11-18 00:51:28.731120] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:08:54.577 passed 00:08:54.577 Test: test_nvmf_nqn_is_valid ...passed 00:08:54.577 Test: test_nvmf_ns_reservation_restore ...passed 00:08:54.577 Test: test_nvmf_subsystem_state_change ...[2024-11-18 00:51:28.731221] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:08:54.577 [2024-11-18 00:51:28.731286] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:63d5bb8e-7c17-4e36-8d5a-bd34476c277": uuid is not the correct length 00:08:54.577 [2024-11-18 00:51:28.731329] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:08:54.577 [2024-11-18 00:51:28.731455] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:08:54.577 passed 00:08:54.577 Test: test_nvmf_reservation_custom_ops ...passed 00:08:54.577 00:08:54.577 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.577 suites 1 1 n/a 0 0 00:08:54.577 tests 22 22 22 0 0 00:08:54.577 asserts 407 407 407 0 n/a 00:08:54.577 00:08:54.577 Elapsed time = 0.010 seconds 00:08:54.577 00:51:28 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:08:54.577 00:08:54.577 00:08:54.577 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.577 http://cunit.sourceforge.net/ 00:08:54.577 00:08:54.577 00:08:54.577 Suite: nvmf 00:08:54.577 Test: test_nvmf_tcp_create ...[2024-11-18 00:51:28.816841] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 732:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:08:54.577 passed 00:08:54.577 Test: test_nvmf_tcp_destroy ...passed 00:08:54.577 Test: test_nvmf_tcp_poll_group_create ...passed 00:08:54.577 Test: test_nvmf_tcp_send_c2h_data ...passed 00:08:54.577 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:08:54.577 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:08:54.577 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:08:54.577 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-11-18 00:51:28.960732] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.577 [2024-11-18 00:51:28.960825] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e2e0 is same with the state(5) to be set 00:08:54.577 passed 00:08:54.577 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:08:54.577 Test: test_nvmf_tcp_icreq_handle 
...[2024-11-18 00:51:28.960937] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e2e0 is same with the state(5) to be set 00:08:54.577 [2024-11-18 00:51:28.960994] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.577 [2024-11-18 00:51:28.961031] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e2e0 is same with the state(5) to be set 00:08:54.577 [2024-11-18 00:51:28.961158] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:54.577 [2024-11-18 00:51:28.961285] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.577 [2024-11-18 00:51:28.961372] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e2e0 is same with the state(5) to be set 00:08:54.577 [2024-11-18 00:51:28.961418] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:54.577 [2024-11-18 00:51:28.961472] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e2e0 is same with the state(5) to be set 00:08:54.577 [2024-11-18 00:51:28.961522] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.577 passed 00:08:54.577 Test: test_nvmf_tcp_check_xfer_type ...passed 00:08:54.577 Test: test_nvmf_tcp_invalid_sgl ...passed 00:08:54.577 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-11-18 00:51:28.961583] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e2e0 is same with the state(5) to be set 00:08:54.577 [2024-11-18 00:51:28.961633] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:08:54.577 [2024-11-18 00:51:28.961705] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e2e0 is same with the state(5) to be set 00:08:54.577 [2024-11-18 00:51:28.961798] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2486:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:08:54.577 [2024-11-18 00:51:28.961856] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.578 [2024-11-18 00:51:28.961899] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e2e0 is same with the state(5) to be set 00:08:54.578 [2024-11-18 00:51:28.961956] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2218:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffcf905f040 00:08:54.578 [2024-11-18 00:51:28.962066] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.578 [2024-11-18 00:51:28.962147] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e7a0 is same with the state(5) to be set 00:08:54.578 [2024-11-18 00:51:28.962207] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2275:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffcf905e7a0 00:08:54.578 [2024-11-18 00:51:28.962257] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.578 [2024-11-18 00:51:28.962315] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e7a0 is same with the state(5) to be set 00:08:54.578 [2024-11-18 00:51:28.962371] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2228:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:08:54.578 [2024-11-18 00:51:28.962430] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.578 [2024-11-18 00:51:28.962493] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e7a0 is same with the state(5) to be set 00:08:54.578 [2024-11-18 00:51:28.962547] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2267:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:08:54.578 [2024-11-18 00:51:28.962593] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.578 [2024-11-18 00:51:28.962646] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e7a0 is same with the state(5) to be set 00:08:54.578 [2024-11-18 00:51:28.962693] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.578 [2024-11-18 00:51:28.962744] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e7a0 is same with the state(5) to be set 00:08:54.578 [2024-11-18 00:51:28.962825] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.578 [2024-11-18 00:51:28.962869] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e7a0 is same with the state(5) to be set 00:08:54.578 [2024-11-18 00:51:28.962934] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.578 [2024-11-18 00:51:28.962969] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e7a0 is same with the state(5) to be set 00:08:54.578 [2024-11-18 00:51:28.963033] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.578 passed 00:08:54.578 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-11-18 00:51:28.963077] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e7a0 is same with the state(5) to be set 00:08:54.578 [2024-11-18 00:51:28.963159] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.578 [2024-11-18 00:51:28.963204] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e7a0 is same with the state(5) to be set 00:08:54.578 [2024-11-18 
00:51:28.963260] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:54.578 [2024-11-18 00:51:28.963304] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf905e7a0 is same with the state(5) to be set 00:08:54.992 passed 00:08:54.992 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-11-18 00:51:28.995553] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:08:54.992 [2024-11-18 00:51:28.995667] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:08:54.992 passed 00:08:54.992 Test: test_nvmf_tcp_tls_generate_retained_psk ...passed[2024-11-18 00:51:28.996134] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:08:54.992 [2024-11-18 00:51:28.996193] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:08:54.992 00:08:54.992 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:08:54.992 00:08:54.992 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.992 suites 1 1 n/a 0 0 00:08:54.992 tests 17 17 17 0 0 00:08:54.992 asserts 222 222 222 0 n/a 00:08:54.992 00:08:54.992 Elapsed time = 0.213 seconds 00:08:54.992 [2024-11-18 00:51:28.996456] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:08:54.992 [2024-11-18 00:51:28.996513] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
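(Editorial aside, not part of the captured log.) Each of the *_ut binaries exercised in this run (ctrlr_ut, ctrlr_bdev_ut, ctrlr_discovery_ut, subsystem_ut, the tcp_ut run above, and the nvmf_ut run that follows) is a standalone CUnit 2.1-3 program; the *ERROR* lines are expected negative-path messages printed while the assertions themselves pass, which is why the Run Summary still reports zero failures. As a rough, hedged illustration only -- the suite name, test function, and assertion below are hypothetical placeholders and are not taken from the SPDK sources -- such a binary registers its suite and tests roughly like this:

#include <CUnit/Basic.h>

/* Hypothetical test case: checks a trivial invariant. */
static void test_example(void)
{
    CU_ASSERT_EQUAL(1 + 1, 2);
}

int main(void)
{
    unsigned int num_failures;

    /* Set up the CUnit test registry. */
    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }

    /* "example" is a placeholder suite name, not an SPDK suite. */
    CU_pSuite suite = CU_add_suite("example", NULL, NULL);
    if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }

    /* Verbose mode produces the Suite/Test/Run Summary layout seen in this log. */
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();
    num_failures = CU_get_number_of_failures();
    CU_cleanup_registry();

    return num_failures ? 1 : 0;
}

Compiled and linked against -lcunit, a sketch like this would emit the same "CUnit - A unit testing framework for C" banner and per-test pass/fail lines that appear throughout this section, with the process exit code reflecting the failure count so the surrounding run_test wrapper can record the result.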
00:08:54.992 00:51:29 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:08:54.992 00:08:54.992 00:08:54.992 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.992 http://cunit.sourceforge.net/ 00:08:54.992 00:08:54.992 00:08:54.992 Suite: nvmf 00:08:54.992 Test: test_nvmf_tgt_create_poll_group ...passed 00:08:54.992 00:08:54.992 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.992 suites 1 1 n/a 0 0 00:08:54.992 tests 1 1 1 0 0 00:08:54.992 asserts 17 17 17 0 n/a 00:08:54.992 00:08:54.992 Elapsed time = 0.032 seconds 00:08:54.992 00:08:54.992 real 0m0.670s 00:08:54.992 user 0m0.249s 00:08:54.992 sys 0m0.423s 00:08:54.992 00:51:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:54.992 ************************************ 00:08:54.992 END TEST unittest_nvmf 00:08:54.992 ************************************ 00:08:54.992 00:51:29 -- common/autotest_common.sh@10 -- # set +x 00:08:54.992 00:51:29 -- unit/unittest.sh@236 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:54.992 00:51:29 -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:54.992 00:51:29 -- unit/unittest.sh@242 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:54.992 00:51:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:54.992 00:51:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:54.992 00:51:29 -- common/autotest_common.sh@10 -- # set +x 00:08:54.992 ************************************ 00:08:54.992 START TEST unittest_nvmf_rdma 00:08:54.992 ************************************ 00:08:54.992 00:51:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:54.992 00:08:54.992 00:08:54.992 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.992 http://cunit.sourceforge.net/ 00:08:54.992 00:08:54.992 00:08:54.992 Suite: nvmf 00:08:54.992 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-11-18 00:51:29.317682] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:08:54.992 [2024-11-18 00:51:29.318148] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:08:54.992 [2024-11-18 00:51:29.318211] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:08:54.992 passed 00:08:54.992 Test: test_spdk_nvmf_rdma_request_process ...passed 00:08:54.992 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:08:54.992 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:08:54.992 Test: test_nvmf_rdma_opts_init ...passed 00:08:54.992 Test: test_nvmf_rdma_request_free_data ...passed 00:08:54.992 Test: test_nvmf_rdma_update_ibv_state ...[2024-11-18 00:51:29.319909] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 616:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 
00:08:54.992 [2024-11-18 00:51:29.319970] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 627:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:08:54.992 passed 00:08:54.992 Test: test_nvmf_rdma_resources_create ...passed 00:08:54.992 Test: test_nvmf_rdma_qpair_compare ...passed 00:08:54.992 Test: test_nvmf_rdma_resize_cq ...[2024-11-18 00:51:29.321725] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1008:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:08:54.992 Using CQ of insufficient size may lead to CQ overrun 00:08:54.992 passed 00:08:54.992 00:08:54.992 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.992 suites 1 1 n/a 0 0 00:08:54.992 tests 10 10 10 0 0 00:08:54.992 asserts 584 584 584 0 n/a 00:08:54.992 00:08:54.992 Elapsed time = 0.004 seconds 00:08:54.992 [2024-11-18 00:51:29.321872] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1013:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:08:54.992 [2024-11-18 00:51:29.321950] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1021:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:55.251 00:08:55.251 real 0m0.058s 00:08:55.251 user 0m0.019s 00:08:55.251 sys 0m0.040s 00:08:55.251 ************************************ 00:08:55.251 END TEST unittest_nvmf_rdma 00:08:55.251 00:51:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.251 00:51:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.251 ************************************ 00:08:55.251 00:51:29 -- unit/unittest.sh@245 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:55.251 00:51:29 -- unit/unittest.sh@249 -- # run_test unittest_scsi unittest_scsi 00:08:55.251 00:51:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:55.251 00:51:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.251 00:51:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.251 ************************************ 00:08:55.251 START TEST unittest_scsi 00:08:55.251 ************************************ 00:08:55.251 00:51:29 -- common/autotest_common.sh@1114 -- # unittest_scsi 00:08:55.251 00:51:29 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:08:55.251 00:08:55.251 00:08:55.251 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.251 http://cunit.sourceforge.net/ 00:08:55.251 00:08:55.251 00:08:55.251 Suite: dev_suite 00:08:55.251 Test: dev_destruct_null_dev ...passed 00:08:55.251 Test: dev_destruct_zero_luns ...passed 00:08:55.252 Test: dev_destruct_null_lun ...passed 00:08:55.252 Test: dev_destruct_success ...passed 00:08:55.252 Test: dev_construct_num_luns_zero ...passed 00:08:55.252 Test: dev_construct_no_lun_zero ...passed 00:08:55.252 Test: dev_construct_null_lun ...passed 00:08:55.252 Test: dev_construct_name_too_long ...[2024-11-18 00:51:29.435131] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:08:55.252 [2024-11-18 00:51:29.435647] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:08:55.252 [2024-11-18 00:51:29.435709] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:08:55.252 [2024-11-18 00:51:29.435769] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 
222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:08:55.252 passed 00:08:55.252 Test: dev_construct_success ...passed 00:08:55.252 Test: dev_construct_success_lun_zero_not_first ...passed 00:08:55.252 Test: dev_queue_mgmt_task_success ...passed 00:08:55.252 Test: dev_queue_task_success ...passed 00:08:55.252 Test: dev_stop_success ...passed 00:08:55.252 Test: dev_add_port_max_ports ...[2024-11-18 00:51:29.436138] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:08:55.252 passed 00:08:55.252 Test: dev_add_port_construct_failure1 ...passed 00:08:55.252 Test: dev_add_port_construct_failure2 ...passed 00:08:55.252 Test: dev_add_port_success1 ...passed 00:08:55.252 Test: dev_add_port_success2 ...passed 00:08:55.252 Test: dev_add_port_success3 ...passed 00:08:55.252 Test: dev_find_port_by_id_num_ports_zero ...passed 00:08:55.252 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:08:55.252 Test: dev_find_port_by_id_success ...passed 00:08:55.252 Test: dev_add_lun_bdev_not_found ...passed 00:08:55.252 Test: dev_add_lun_no_free_lun_id ...[2024-11-18 00:51:29.436268] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:08:55.252 [2024-11-18 00:51:29.436381] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:08:55.252 [2024-11-18 00:51:29.436929] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:08:55.252 passed 00:08:55.252 Test: dev_add_lun_success1 ...passed 00:08:55.252 Test: dev_add_lun_success2 ...passed 00:08:55.252 Test: dev_check_pending_tasks ...passed 00:08:55.252 Test: dev_iterate_luns ...passed 00:08:55.252 Test: dev_find_free_lun ...passed 00:08:55.252 00:08:55.252 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.252 suites 1 1 n/a 0 0 00:08:55.252 tests 29 29 29 0 0 00:08:55.252 asserts 97 97 97 0 n/a 00:08:55.252 00:08:55.252 Elapsed time = 0.003 seconds 00:08:55.252 00:51:29 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:08:55.252 00:08:55.252 00:08:55.252 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.252 http://cunit.sourceforge.net/ 00:08:55.252 00:08:55.252 00:08:55.252 Suite: lun_suite 00:08:55.252 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:08:55.252 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:08:55.252 Test: lun_task_mgmt_execute_lun_reset ...[2024-11-18 00:51:29.482385] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:08:55.252 [2024-11-18 00:51:29.482846] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:08:55.252 passed 00:08:55.252 Test: lun_task_mgmt_execute_target_reset ...passed 00:08:55.252 Test: lun_task_mgmt_execute_invalid_case ...passed 00:08:55.252 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:08:55.252 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:08:55.252 Test: lun_append_task_null_lun_not_supported ...passed 00:08:55.252 Test: 
lun_execute_scsi_task_pending ...passed 00:08:55.252 Test: lun_execute_scsi_task_complete ...passed 00:08:55.252 Test: lun_execute_scsi_task_resize ...passed 00:08:55.252 Test: lun_destruct_success ...passed 00:08:55.252 Test: lun_construct_null_ctx ...passed 00:08:55.252 Test: lun_construct_success ...passed 00:08:55.252 Test: lun_reset_task_wait_scsi_task_complete ...[2024-11-18 00:51:29.483054] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:08:55.252 [2024-11-18 00:51:29.483292] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:08:55.252 passed 00:08:55.252 Test: lun_reset_task_suspend_scsi_task ...passed 00:08:55.252 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:08:55.252 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:08:55.252 00:08:55.252 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.252 suites 1 1 n/a 0 0 00:08:55.252 tests 18 18 18 0 0 00:08:55.252 asserts 153 153 153 0 n/a 00:08:55.252 00:08:55.252 Elapsed time = 0.001 seconds 00:08:55.252 00:51:29 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:08:55.252 00:08:55.252 00:08:55.252 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.252 http://cunit.sourceforge.net/ 00:08:55.252 00:08:55.252 00:08:55.252 Suite: scsi_suite 00:08:55.252 Test: scsi_init ...passed 00:08:55.252 00:08:55.252 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.252 suites 1 1 n/a 0 0 00:08:55.252 tests 1 1 1 0 0 00:08:55.252 asserts 1 1 1 0 n/a 00:08:55.252 00:08:55.252 Elapsed time = 0.000 seconds 00:08:55.252 00:51:29 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:08:55.252 00:08:55.252 00:08:55.252 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.252 http://cunit.sourceforge.net/ 00:08:55.252 00:08:55.252 00:08:55.252 Suite: translation_suite 00:08:55.252 Test: mode_select_6_test ...passed 00:08:55.252 Test: mode_select_6_test2 ...passed 00:08:55.252 Test: mode_sense_6_test ...passed 00:08:55.252 Test: mode_sense_10_test ...passed 00:08:55.252 Test: inquiry_evpd_test ...passed 00:08:55.252 Test: inquiry_standard_test ...passed 00:08:55.252 Test: inquiry_overflow_test ...passed 00:08:55.252 Test: task_complete_test ...passed 00:08:55.252 Test: lba_range_test ...passed 00:08:55.252 Test: xfer_len_test ...[2024-11-18 00:51:29.562594] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:08:55.252 passed 00:08:55.252 Test: xfer_test ...passed 00:08:55.252 Test: scsi_name_padding_test ...passed 00:08:55.252 Test: get_dif_ctx_test ...passed 00:08:55.252 Test: unmap_split_test ...passed 00:08:55.252 00:08:55.252 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.252 suites 1 1 n/a 0 0 00:08:55.252 tests 14 14 14 0 0 00:08:55.252 asserts 1200 1200 1200 0 n/a 00:08:55.252 00:08:55.252 Elapsed time = 0.005 seconds 00:08:55.252 00:51:29 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:08:55.252 00:08:55.252 00:08:55.252 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.252 http://cunit.sourceforge.net/ 00:08:55.252 00:08:55.252 00:08:55.252 Suite: reservation_suite 00:08:55.252 Test: test_reservation_register ...passed 00:08:55.252 Test: test_reservation_reserve ...passed 00:08:55.252 Test: 
test_reservation_preempt_non_all_regs ...[2024-11-18 00:51:29.607784] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:55.252 [2024-11-18 00:51:29.608253] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:55.252 [2024-11-18 00:51:29.608346] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:08:55.252 [2024-11-18 00:51:29.608480] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:08:55.252 [2024-11-18 00:51:29.608559] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:55.252 passed 00:08:55.252 Test: test_reservation_preempt_all_regs ...passed 00:08:55.252 Test: test_reservation_cmds_conflict ...[2024-11-18 00:51:29.608647] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:08:55.252 [2024-11-18 00:51:29.608817] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:55.252 [2024-11-18 00:51:29.608971] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:55.252 [2024-11-18 00:51:29.609047] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:08:55.252 [2024-11-18 00:51:29.609110] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:55.252 [2024-11-18 00:51:29.609154] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:55.252 [2024-11-18 00:51:29.609212] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:55.253 [2024-11-18 00:51:29.609255] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:55.253 passed 00:08:55.253 Test: test_scsi2_reserve_release ...passed 00:08:55.253 Test: test_pr_with_scsi2_reserve_release ...passed 00:08:55.253 00:08:55.253 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.253 suites 1 1 n/a 0 0 00:08:55.253 tests 7 7 7 0 0 00:08:55.253 asserts 257 257 257 0 n/a 00:08:55.253 00:08:55.253 Elapsed time = 0.002 seconds 00:08:55.253 [2024-11-18 00:51:29.609379] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:55.253 00:08:55.253 real 0m0.215s 00:08:55.253 user 0m0.075s 00:08:55.253 sys 0m0.142s 00:08:55.253 ************************************ 00:08:55.253 END TEST unittest_scsi 00:08:55.253 ************************************ 00:08:55.253 00:51:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.253 00:51:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.512 00:51:29 -- unit/unittest.sh@252 -- # uname -s 00:08:55.512 00:51:29 -- unit/unittest.sh@252 -- # '[' Linux = Linux ']' 00:08:55.512 00:51:29 -- unit/unittest.sh@253 -- # run_test unittest_sock 
unittest_sock 00:08:55.512 00:51:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:55.512 00:51:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.512 00:51:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.512 ************************************ 00:08:55.512 START TEST unittest_sock 00:08:55.512 ************************************ 00:08:55.512 00:51:29 -- common/autotest_common.sh@1114 -- # unittest_sock 00:08:55.512 00:51:29 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:08:55.512 00:08:55.512 00:08:55.512 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.512 http://cunit.sourceforge.net/ 00:08:55.512 00:08:55.512 00:08:55.512 Suite: sock 00:08:55.512 Test: posix_sock ...passed 00:08:55.512 Test: ut_sock ...passed 00:08:55.512 Test: posix_sock_group ...passed 00:08:55.512 Test: ut_sock_group ...passed 00:08:55.512 Test: posix_sock_group_fairness ...passed 00:08:55.512 Test: _posix_sock_close ...passed 00:08:55.512 Test: sock_get_default_opts ...passed 00:08:55.512 Test: ut_sock_impl_get_set_opts ...passed 00:08:55.512 Test: posix_sock_impl_get_set_opts ...passed 00:08:55.512 Test: ut_sock_map ...passed 00:08:55.512 Test: override_impl_opts ...passed 00:08:55.512 Test: ut_sock_group_get_ctx ...passed 00:08:55.512 00:08:55.512 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.512 suites 1 1 n/a 0 0 00:08:55.512 tests 12 12 12 0 0 00:08:55.512 asserts 349 349 349 0 n/a 00:08:55.512 00:08:55.512 Elapsed time = 0.008 seconds 00:08:55.512 00:51:29 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:08:55.512 00:08:55.512 00:08:55.512 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.512 http://cunit.sourceforge.net/ 00:08:55.512 00:08:55.512 00:08:55.512 Suite: posix 00:08:55.512 Test: flush ...passed 00:08:55.512 00:08:55.512 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.512 suites 1 1 n/a 0 0 00:08:55.512 tests 1 1 1 0 0 00:08:55.512 asserts 28 28 28 0 n/a 00:08:55.512 00:08:55.512 Elapsed time = 0.000 seconds 00:08:55.512 00:51:29 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:55.512 00:08:55.512 real 0m0.127s 00:08:55.512 user 0m0.052s 00:08:55.512 sys 0m0.053s 00:08:55.512 ************************************ 00:08:55.512 END TEST unittest_sock 00:08:55.512 ************************************ 00:08:55.512 00:51:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.512 00:51:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.512 00:51:29 -- unit/unittest.sh@255 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:55.512 00:51:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:55.512 00:51:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.512 00:51:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.512 ************************************ 00:08:55.512 START TEST unittest_thread 00:08:55.512 ************************************ 00:08:55.512 00:51:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:55.772 00:08:55.772 00:08:55.772 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.772 http://cunit.sourceforge.net/ 00:08:55.772 00:08:55.772 00:08:55.772 Suite: io_channel 00:08:55.772 Test: thread_alloc ...passed 00:08:55.772 Test: thread_send_msg ...passed 
00:08:55.772 Test: thread_poller ...passed 00:08:55.772 Test: poller_pause ...passed 00:08:55.772 Test: thread_for_each ...passed 00:08:55.772 Test: for_each_channel_remove ...passed 00:08:55.772 Test: for_each_channel_unreg ...[2024-11-18 00:51:29.936116] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x7ffcfbc4ae20 already registered (old:0x613000000200 new:0x6130000003c0) 00:08:55.772 passed 00:08:55.772 Test: thread_name ...passed 00:08:55.772 Test: channel ...[2024-11-18 00:51:29.940677] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2299:spdk_get_io_channel: *ERROR*: could not find io_device 0x55a1d0fb60e0 00:08:55.772 passed 00:08:55.772 Test: channel_destroy_races ...passed 00:08:55.772 Test: thread_exit_test ...[2024-11-18 00:51:29.946244] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 631:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:08:55.772 passed 00:08:55.772 Test: thread_update_stats_test ...passed 00:08:55.772 Test: nested_channel ...passed 00:08:55.772 Test: device_unregister_and_thread_exit_race ...passed 00:08:55.772 Test: cache_closest_timed_poller ...passed 00:08:55.772 Test: multi_timed_pollers_have_same_expiration ...passed 00:08:55.772 Test: io_device_lookup ...passed 00:08:55.772 Test: spdk_spin ...[2024-11-18 00:51:29.958067] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3063:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:55.772 [2024-11-18 00:51:29.958164] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffcfbc4ae10 00:08:55.772 [2024-11-18 00:51:29.958291] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3101:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:55.772 [2024-11-18 00:51:29.960115] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:55.772 [2024-11-18 00:51:29.960205] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffcfbc4ae10 00:08:55.772 [2024-11-18 00:51:29.960251] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:55.772 [2024-11-18 00:51:29.960324] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffcfbc4ae10 00:08:55.772 [2024-11-18 00:51:29.960369] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:55.772 [2024-11-18 00:51:29.960426] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffcfbc4ae10 00:08:55.772 [2024-11-18 00:51:29.960464] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3045:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:08:55.772 [2024-11-18 00:51:29.960533] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffcfbc4ae10 00:08:55.772 passed 00:08:55.772 Test: for_each_channel_and_thread_exit_race ...passed 00:08:55.772 Test: for_each_thread_and_thread_exit_race ...passed 00:08:55.772 00:08:55.772 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.772 
suites 1 1 n/a 0 0 00:08:55.772 tests 20 20 20 0 0 00:08:55.772 asserts 409 409 409 0 n/a 00:08:55.772 00:08:55.772 Elapsed time = 0.054 seconds 00:08:55.772 00:08:55.772 real 0m0.107s 00:08:55.772 user 0m0.079s 00:08:55.772 sys 0m0.029s 00:08:55.772 ************************************ 00:08:55.772 END TEST unittest_thread 00:08:55.772 ************************************ 00:08:55.772 00:51:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.772 00:51:30 -- common/autotest_common.sh@10 -- # set +x 00:08:55.772 00:51:30 -- unit/unittest.sh@256 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:55.772 00:51:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:55.772 00:51:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.772 00:51:30 -- common/autotest_common.sh@10 -- # set +x 00:08:55.772 ************************************ 00:08:55.772 START TEST unittest_iobuf 00:08:55.772 ************************************ 00:08:55.772 00:51:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:55.772 00:08:55.772 00:08:55.772 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.772 http://cunit.sourceforge.net/ 00:08:55.772 00:08:55.772 00:08:55.772 Suite: io_channel 00:08:55.772 Test: iobuf ...passed 00:08:55.772 Test: iobuf_cache ...[2024-11-18 00:51:30.089267] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:55.772 [2024-11-18 00:51:30.090355] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:55.772 [2024-11-18 00:51:30.090786] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:08:55.773 [2024-11-18 00:51:30.091369] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:55.773 [2024-11-18 00:51:30.091599] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:55.773 [2024-11-18 00:51:30.092147] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:08:55.773 passed 00:08:55.773 00:08:55.773 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.773 suites 1 1 n/a 0 0 00:08:55.773 tests 2 2 2 0 0 00:08:55.773 asserts 107 107 107 0 n/a 00:08:55.773 00:08:55.773 Elapsed time = 0.010 seconds 00:08:55.773 00:08:55.773 real 0m0.058s 00:08:55.773 user 0m0.014s 00:08:55.773 sys 0m0.044s 00:08:55.773 00:51:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.773 00:51:30 -- common/autotest_common.sh@10 -- # set +x 00:08:55.773 ************************************ 00:08:55.773 END TEST unittest_iobuf 00:08:55.773 ************************************ 00:08:55.773 00:51:30 -- unit/unittest.sh@257 -- # run_test unittest_util unittest_util 00:08:55.773 00:51:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:55.773 00:51:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.773 00:51:30 -- common/autotest_common.sh@10 -- # set +x 00:08:56.032 ************************************ 00:08:56.032 START TEST unittest_util 00:08:56.032 ************************************ 00:08:56.032 00:51:30 -- common/autotest_common.sh@1114 -- # unittest_util 00:08:56.032 00:51:30 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:08:56.032 00:08:56.032 00:08:56.032 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.032 http://cunit.sourceforge.net/ 00:08:56.032 00:08:56.032 00:08:56.032 Suite: base64 00:08:56.032 Test: test_base64_get_encoded_strlen ...passed 00:08:56.032 Test: test_base64_get_decoded_len ...passed 00:08:56.032 Test: test_base64_encode ...passed 00:08:56.032 Test: test_base64_decode ...passed 00:08:56.032 Test: test_base64_urlsafe_encode ...passed 00:08:56.032 Test: test_base64_urlsafe_decode ...passed 00:08:56.032 00:08:56.032 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.032 suites 1 1 n/a 0 0 00:08:56.032 tests 6 6 6 0 0 00:08:56.032 asserts 112 112 112 0 n/a 00:08:56.032 00:08:56.032 Elapsed time = 0.000 seconds 00:08:56.032 00:51:30 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:08:56.032 00:08:56.032 00:08:56.032 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.032 http://cunit.sourceforge.net/ 00:08:56.032 00:08:56.032 00:08:56.032 Suite: bit_array 00:08:56.032 Test: test_1bit ...passed 00:08:56.032 Test: test_64bit ...passed 00:08:56.032 Test: test_find ...passed 00:08:56.032 Test: test_resize ...passed 00:08:56.032 Test: test_errors ...passed 00:08:56.032 Test: test_count ...passed 00:08:56.032 Test: test_mask_store_load ...passed 00:08:56.032 Test: test_mask_clear ...passed 00:08:56.032 00:08:56.032 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.032 suites 1 1 n/a 0 0 00:08:56.032 tests 8 8 8 0 0 00:08:56.032 asserts 5075 5075 5075 0 n/a 00:08:56.032 00:08:56.032 Elapsed time = 0.002 seconds 00:08:56.032 00:51:30 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:08:56.032 00:08:56.032 00:08:56.032 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.032 http://cunit.sourceforge.net/ 00:08:56.032 00:08:56.032 00:08:56.032 Suite: cpuset 00:08:56.032 Test: test_cpuset ...passed 00:08:56.032 Test: test_cpuset_parse ...[2024-11-18 00:51:30.282471] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:08:56.032 [2024-11-18 00:51:30.282882] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:08:56.032 [2024-11-18 00:51:30.283007] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:08:56.032 [2024-11-18 00:51:30.283129] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:08:56.032 [2024-11-18 00:51:30.283196] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:08:56.032 passed 00:08:56.032 Test: test_cpuset_fmt ...[2024-11-18 00:51:30.283254] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:08:56.032 [2024-11-18 00:51:30.283303] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:08:56.032 [2024-11-18 00:51:30.283373] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:08:56.032 passed 00:08:56.032 00:08:56.032 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.032 suites 1 1 n/a 0 0 00:08:56.032 tests 3 3 3 0 0 00:08:56.032 asserts 65 65 65 0 n/a 00:08:56.032 00:08:56.032 Elapsed time = 0.002 seconds 00:08:56.032 00:51:30 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:08:56.032 00:08:56.032 00:08:56.032 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.032 http://cunit.sourceforge.net/ 00:08:56.032 00:08:56.032 00:08:56.032 Suite: crc16 00:08:56.032 Test: test_crc16_t10dif ...passed 00:08:56.032 Test: test_crc16_t10dif_seed ...passed 00:08:56.032 Test: test_crc16_t10dif_copy ...passed 00:08:56.032 00:08:56.032 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.032 suites 1 1 n/a 0 0 00:08:56.032 tests 3 3 3 0 0 00:08:56.032 asserts 5 5 5 0 n/a 00:08:56.032 00:08:56.032 Elapsed time = 0.000 seconds 00:08:56.032 00:51:30 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:08:56.032 00:08:56.032 00:08:56.032 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.032 http://cunit.sourceforge.net/ 00:08:56.032 00:08:56.032 00:08:56.032 Suite: crc32_ieee 00:08:56.032 Test: test_crc32_ieee ...passed 00:08:56.032 00:08:56.032 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.032 suites 1 1 n/a 0 0 00:08:56.032 tests 1 1 1 0 0 00:08:56.033 asserts 1 1 1 0 n/a 00:08:56.033 00:08:56.033 Elapsed time = 0.000 seconds 00:08:56.033 00:51:30 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:08:56.033 00:08:56.033 00:08:56.033 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.033 http://cunit.sourceforge.net/ 00:08:56.033 00:08:56.033 00:08:56.033 Suite: crc32c 00:08:56.033 Test: test_crc32c ...passed 00:08:56.033 Test: test_crc32c_nvme ...passed 00:08:56.033 00:08:56.033 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.033 suites 1 1 n/a 0 0 00:08:56.033 tests 2 2 2 0 0 00:08:56.033 asserts 16 16 16 0 n/a 00:08:56.033 00:08:56.033 Elapsed time = 0.001 seconds 00:08:56.033 00:51:30 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:08:56.293 00:08:56.293 00:08:56.293 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.293 http://cunit.sourceforge.net/ 00:08:56.293 00:08:56.293 00:08:56.293 Suite: crc64 00:08:56.293 Test: test_crc64_nvme 
...passed 00:08:56.293 00:08:56.293 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.293 suites 1 1 n/a 0 0 00:08:56.293 tests 1 1 1 0 0 00:08:56.293 asserts 4 4 4 0 n/a 00:08:56.293 00:08:56.293 Elapsed time = 0.000 seconds 00:08:56.293 00:51:30 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:08:56.293 00:08:56.293 00:08:56.293 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.293 http://cunit.sourceforge.net/ 00:08:56.293 00:08:56.293 00:08:56.293 Suite: string 00:08:56.293 Test: test_parse_ip_addr ...passed 00:08:56.293 Test: test_str_chomp ...passed 00:08:56.293 Test: test_parse_capacity ...passed 00:08:56.293 Test: test_sprintf_append_realloc ...passed 00:08:56.293 Test: test_strtol ...passed 00:08:56.293 Test: test_strtoll ...passed 00:08:56.293 Test: test_strarray ...passed 00:08:56.293 Test: test_strcpy_replace ...passed 00:08:56.293 00:08:56.293 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.293 suites 1 1 n/a 0 0 00:08:56.293 tests 8 8 8 0 0 00:08:56.293 asserts 161 161 161 0 n/a 00:08:56.293 00:08:56.293 Elapsed time = 0.001 seconds 00:08:56.293 00:51:30 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:08:56.293 00:08:56.293 00:08:56.293 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.293 http://cunit.sourceforge.net/ 00:08:56.293 00:08:56.293 00:08:56.293 Suite: dif 00:08:56.293 Test: dif_generate_and_verify_test ...[2024-11-18 00:51:30.517706] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:56.293 [2024-11-18 00:51:30.518287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:56.293 [2024-11-18 00:51:30.518600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:56.293 [2024-11-18 00:51:30.518902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:56.293 [2024-11-18 00:51:30.519196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:56.293 [2024-11-18 00:51:30.519506] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:56.293 passed 00:08:56.294 Test: dif_disable_check_test ...[2024-11-18 00:51:30.520565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:56.294 [2024-11-18 00:51:30.520963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:56.294 [2024-11-18 00:51:30.521268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:56.294 passed 00:08:56.294 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-11-18 00:51:30.522368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:08:56.294 [2024-11-18 00:51:30.522706] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:08:56.294 [2024-11-18 
00:51:30.523055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:08:56.294 [2024-11-18 00:51:30.523456] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:08:56.294 [2024-11-18 00:51:30.523815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:56.294 [2024-11-18 00:51:30.524165] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:56.294 [2024-11-18 00:51:30.524501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:56.294 [2024-11-18 00:51:30.524835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:56.294 [2024-11-18 00:51:30.525172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:56.294 [2024-11-18 00:51:30.525547] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:56.294 [2024-11-18 00:51:30.525902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:56.294 passed 00:08:56.294 Test: dif_apptag_mask_test ...[2024-11-18 00:51:30.526269] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:56.294 [2024-11-18 00:51:30.526603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:56.294 passed 00:08:56.294 Test: dif_sec_512_md_0_error_test ...[2024-11-18 00:51:30.526835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:56.294 passed 00:08:56.294 Test: dif_sec_4096_md_0_error_test ...[2024-11-18 00:51:30.526884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:56.294 [2024-11-18 00:51:30.526952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:56.294 passed 00:08:56.294 Test: dif_sec_4100_md_128_error_test ...[2024-11-18 00:51:30.527040] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:56.294 [2024-11-18 00:51:30.527113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:56.294 passed 00:08:56.294 Test: dif_guard_seed_test ...passed 00:08:56.294 Test: dif_guard_value_test ...passed 00:08:56.294 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:08:56.294 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:08:56.294 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:56.294 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:56.294 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:56.294 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:08:56.294 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:56.294 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:56.294 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:08:56.294 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:56.294 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:08:56.294 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:08:56.294 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:56.294 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:56.294 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:56.294 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:56.294 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:56.294 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:56.294 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-18 00:51:30.572565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=dd4c, Actual=fd4c 00:08:56.294 [2024-11-18 00:51:30.575126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=de21, Actual=fe21 00:08:56.294 [2024-11-18 00:51:30.577680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.294 [2024-11-18 00:51:30.580198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.294 [2024-11-18 00:51:30.582713] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=2000005f 00:08:56.294 [2024-11-18 00:51:30.585214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=2000005f 00:08:56.294 [2024-11-18 00:51:30.587769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=7f16 00:08:56.294 [2024-11-18 00:51:30.589963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fe21, Actual=7d98 00:08:56.294 [2024-11-18 00:51:30.592213] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=3ab753ed, Actual=1ab753ed 00:08:56.294 [2024-11-18 00:51:30.594705] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=18574660, Actual=38574660 00:08:56.294 [2024-11-18 00:51:30.597284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.294 [2024-11-18 00:51:30.599822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.294 [2024-11-18 00:51:30.602303] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=2000005f 00:08:56.294 [2024-11-18 00:51:30.604822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=2000005f 00:08:56.294 [2024-11-18 00:51:30.607331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=3b3ec17c 00:08:56.294 [2024-11-18 00:51:30.609569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=38574660, Actual=bce94d4e 00:08:56.294 [2024-11-18 00:51:30.611824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:56.294 [2024-11-18 00:51:30.614356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:08:56.294 [2024-11-18 00:51:30.616845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.294 [2024-11-18 00:51:30.619338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.294 [2024-11-18 00:51:30.621791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=205f 00:08:56.294 [2024-11-18 00:51:30.624286] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=205f 00:08:56.294 [2024-11-18 00:51:30.626798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=7093fed8ec258050 00:08:56.294 [2024-11-18 00:51:30.628981] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=88010a2d4837a266, Actual=1895a6e9db90c51a 00:08:56.294 passed 00:08:56.294 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-11-18 00:51:30.630290] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:56.294 [2024-11-18 00:51:30.630605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:56.294 [2024-11-18 00:51:30.630914] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.294 [2024-11-18 00:51:30.631255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed 
to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.294 [2024-11-18 00:51:30.631594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.294 [2024-11-18 00:51:30.631904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.294 [2024-11-18 00:51:30.632215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7f16 00:08:56.295 [2024-11-18 00:51:30.632507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=7d98 00:08:56.295 [2024-11-18 00:51:30.632823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 00:08:56.295 [2024-11-18 00:51:30.633141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:08:56.295 [2024-11-18 00:51:30.633476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.633797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.634113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.295 [2024-11-18 00:51:30.634431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.295 [2024-11-18 00:51:30.634746] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3b3ec17c 00:08:56.295 [2024-11-18 00:51:30.635030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=bce94d4e 00:08:56.295 [2024-11-18 00:51:30.635370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:56.295 [2024-11-18 00:51:30.635683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:08:56.295 [2024-11-18 00:51:30.636003] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.636292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.636606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:56.295 [2024-11-18 00:51:30.636914] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:56.295 [2024-11-18 00:51:30.637246] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7093fed8ec258050 00:08:56.295 [2024-11-18 00:51:30.637552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=1895a6e9db90c51a 00:08:56.295 passed 00:08:56.295 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-11-18 00:51:30.637911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:56.295 [2024-11-18 00:51:30.638244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:56.295 [2024-11-18 00:51:30.638554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.638873] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.639207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.295 [2024-11-18 00:51:30.639523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.295 [2024-11-18 00:51:30.639837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7f16 00:08:56.295 [2024-11-18 00:51:30.640130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=7d98 00:08:56.295 [2024-11-18 00:51:30.640433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 00:08:56.295 [2024-11-18 00:51:30.640750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:08:56.295 [2024-11-18 00:51:30.641062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.641372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.641688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.295 [2024-11-18 00:51:30.642003] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.295 [2024-11-18 00:51:30.642306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3b3ec17c 00:08:56.295 [2024-11-18 00:51:30.642603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=bce94d4e 00:08:56.295 [2024-11-18 00:51:30.642924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:56.295 [2024-11-18 00:51:30.643250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:08:56.295 [2024-11-18 00:51:30.643571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.643880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.644200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:56.295 [2024-11-18 00:51:30.644508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:56.295 [2024-11-18 00:51:30.644833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7093fed8ec258050 00:08:56.295 [2024-11-18 00:51:30.645125] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=1895a6e9db90c51a 00:08:56.295 passed 00:08:56.295 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-11-18 00:51:30.645484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:56.295 [2024-11-18 00:51:30.645816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:56.295 [2024-11-18 00:51:30.646136] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.646437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.646783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.295 [2024-11-18 00:51:30.647105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.295 [2024-11-18 00:51:30.647422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7f16 00:08:56.295 [2024-11-18 00:51:30.647719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=7d98 00:08:56.295 [2024-11-18 00:51:30.648020] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 00:08:56.295 [2024-11-18 00:51:30.648328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:08:56.295 [2024-11-18 00:51:30.648662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.648978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.649280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.295 [2024-11-18 00:51:30.649596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.295 [2024-11-18 00:51:30.649909] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3b3ec17c 00:08:56.295 [2024-11-18 00:51:30.650222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=bce94d4e 00:08:56.295 [2024-11-18 00:51:30.650526] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:56.295 [2024-11-18 00:51:30.650836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:08:56.295 [2024-11-18 00:51:30.651159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.651484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.295 [2024-11-18 00:51:30.651799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:56.295 [2024-11-18 00:51:30.652108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:56.295 [2024-11-18 00:51:30.652432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7093fed8ec258050 00:08:56.295 [2024-11-18 00:51:30.652735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=1895a6e9db90c51a 00:08:56.295 passed 00:08:56.295 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-11-18 00:51:30.653088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:56.295 [2024-11-18 00:51:30.653399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:56.296 [2024-11-18 00:51:30.653707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.296 [2024-11-18 00:51:30.654021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.296 [2024-11-18 00:51:30.654368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.296 [2024-11-18 00:51:30.654678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.296 [2024-11-18 00:51:30.655013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7f16 00:08:56.296 [2024-11-18 00:51:30.655326] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=7d98 00:08:56.296 passed 00:08:56.296 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-11-18 00:51:30.655687] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, 
Actual=1ab753ed 00:08:56.296 [2024-11-18 00:51:30.655993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:08:56.296 [2024-11-18 00:51:30.656329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.296 [2024-11-18 00:51:30.656637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.296 [2024-11-18 00:51:30.656946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.296 [2024-11-18 00:51:30.657258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.296 [2024-11-18 00:51:30.657569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3b3ec17c 00:08:56.296 [2024-11-18 00:51:30.657868] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=bce94d4e 00:08:56.296 [2024-11-18 00:51:30.658219] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:56.296 [2024-11-18 00:51:30.658540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:08:56.296 [2024-11-18 00:51:30.658865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.296 [2024-11-18 00:51:30.659203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.296 [2024-11-18 00:51:30.659515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:56.296 [2024-11-18 00:51:30.659835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:56.296 [2024-11-18 00:51:30.660153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7093fed8ec258050 00:08:56.296 [2024-11-18 00:51:30.660456] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=1895a6e9db90c51a 00:08:56.296 passed 00:08:56.296 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-11-18 00:51:30.660805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:56.296 [2024-11-18 00:51:30.661124] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:56.296 [2024-11-18 00:51:30.661433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.296 [2024-11-18 00:51:30.661754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.296 [2024-11-18 
00:51:30.662093] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.296 [2024-11-18 00:51:30.662406] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.296 [2024-11-18 00:51:30.662735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7f16 00:08:56.296 [2024-11-18 00:51:30.663031] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=7d98 00:08:56.296 passed 00:08:56.296 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-11-18 00:51:30.663395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 00:08:56.296 [2024-11-18 00:51:30.663709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:08:56.296 [2024-11-18 00:51:30.664034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.296 [2024-11-18 00:51:30.664352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.296 [2024-11-18 00:51:30.664666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.296 [2024-11-18 00:51:30.664980] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:56.296 [2024-11-18 00:51:30.665288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3b3ec17c 00:08:56.296 [2024-11-18 00:51:30.665580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=bce94d4e 00:08:56.296 [2024-11-18 00:51:30.665927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:56.296 [2024-11-18 00:51:30.666265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:08:56.296 [2024-11-18 00:51:30.666586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.296 [2024-11-18 00:51:30.666897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:56.296 [2024-11-18 00:51:30.667214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:56.296 [2024-11-18 00:51:30.667526] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:56.296 [2024-11-18 00:51:30.667851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7093fed8ec258050 00:08:56.296 [2024-11-18 00:51:30.668157] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=1895a6e9db90c51a 00:08:56.296 passed 00:08:56.296 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:08:56.296 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:56.296 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:56.557 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:56.557 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:56.557 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:56.557 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:56.557 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:56.557 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:56.557 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-18 00:51:30.712828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=dd4c, Actual=fd4c 00:08:56.557 [2024-11-18 00:51:30.713972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a0a9, Actual=80a9 00:08:56.557 [2024-11-18 00:51:30.715095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.557 [2024-11-18 00:51:30.716211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.557 [2024-11-18 00:51:30.717340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=2000005f 00:08:56.557 [2024-11-18 00:51:30.718459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=2000005f 00:08:56.557 [2024-11-18 00:51:30.719581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=7f16 00:08:56.557 [2024-11-18 00:51:30.720674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=55a, Actual=86e3 00:08:56.557 [2024-11-18 00:51:30.721783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=3ab753ed, Actual=1ab753ed 00:08:56.557 [2024-11-18 00:51:30.722909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=d1756f2f, Actual=f1756f2f 00:08:56.557 [2024-11-18 00:51:30.724034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.557 [2024-11-18 00:51:30.725173] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.557 [2024-11-18 00:51:30.726290] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=2000005f 00:08:56.557 [2024-11-18 00:51:30.727419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=2000005f 00:08:56.557 [2024-11-18 00:51:30.728528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=95, Expected=1ab753ed, Actual=3b3ec17c 00:08:56.557 [2024-11-18 00:51:30.729641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=b80b441a, Actual=3cb54f34 00:08:56.557 [2024-11-18 00:51:30.730757] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:56.557 [2024-11-18 00:51:30.731911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=4ec2a02ce0d0eac0, Actual=4ec2a02cc0d0eac0 00:08:56.557 [2024-11-18 00:51:30.733020] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.557 [2024-11-18 00:51:30.734158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.557 [2024-11-18 00:51:30.735279] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=205f 00:08:56.557 [2024-11-18 00:51:30.736404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=205f 00:08:56.557 [2024-11-18 00:51:30.737520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=7093fed8ec258050 00:08:56.557 [2024-11-18 00:51:30.738676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5d577b185097baaa, Actual=cdc3d7dcc330ddd6 00:08:56.557 passed 00:08:56.557 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-18 00:51:30.739062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=dd4c, Actual=fd4c 00:08:56.557 [2024-11-18 00:51:30.739348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3733, Actual=1733 00:08:56.557 [2024-11-18 00:51:30.739629] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:56.557 [2024-11-18 00:51:30.739913] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:56.557 [2024-11-18 00:51:30.740218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:56.557 [2024-11-18 00:51:30.740532] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:56.557 [2024-11-18 00:51:30.740822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=7f16 00:08:56.557 [2024-11-18 00:51:30.741100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=1179 00:08:56.557 [2024-11-18 00:51:30.741378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3ab753ed, Actual=1ab753ed 00:08:56.557 [2024-11-18 00:51:30.741656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f2c37b58, 
Actual=d2c37b58 00:08:56.557 [2024-11-18 00:51:30.741950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:56.557 [2024-11-18 00:51:30.742257] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:56.557 [2024-11-18 00:51:30.742543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:56.557 [2024-11-18 00:51:30.742820] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:56.557 [2024-11-18 00:51:30.743096] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=3b3ec17c 00:08:56.557 [2024-11-18 00:51:30.743380] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=1f035b43 00:08:56.557 [2024-11-18 00:51:30.743683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:56.557 [2024-11-18 00:51:30.743943] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5bbd3b5ff9036505, Actual=5bbd3b5fd9036505 00:08:56.557 [2024-11-18 00:51:30.744223] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:56.557 [2024-11-18 00:51:30.744511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:56.557 [2024-11-18 00:51:30.744807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:56.557 [2024-11-18 00:51:30.745069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:56.557 [2024-11-18 00:51:30.745373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=7093fed8ec258050 00:08:56.557 [2024-11-18 00:51:30.745654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=d8bc4cafdae35213 00:08:56.557 passed 00:08:56.557 Test: dix_sec_512_md_0_error ...[2024-11-18 00:51:30.745746] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:56.557 passed 00:08:56.557 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:08:56.557 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:56.558 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:56.558 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:56.558 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:56.558 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:56.558 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:56.558 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:56.558 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:56.558 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-18 00:51:30.789749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=dd4c, Actual=fd4c 00:08:56.558 [2024-11-18 00:51:30.790901] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a0a9, Actual=80a9 00:08:56.558 [2024-11-18 00:51:30.792021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.558 [2024-11-18 00:51:30.793126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.558 [2024-11-18 00:51:30.794265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=2000005f 00:08:56.558 [2024-11-18 00:51:30.795403] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=2000005f 00:08:56.558 [2024-11-18 00:51:30.796501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=7f16 00:08:56.558 [2024-11-18 00:51:30.797639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=55a, Actual=86e3 00:08:56.558 [2024-11-18 00:51:30.798765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=3ab753ed, Actual=1ab753ed 00:08:56.558 [2024-11-18 00:51:30.799883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=d1756f2f, Actual=f1756f2f 00:08:56.558 [2024-11-18 00:51:30.801004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.558 [2024-11-18 00:51:30.802111] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.558 [2024-11-18 00:51:30.803251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=2000005f 00:08:56.558 [2024-11-18 00:51:30.804373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=2000005f 00:08:56.558 [2024-11-18 00:51:30.805474] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=3b3ec17c 00:08:56.558 [2024-11-18 00:51:30.806607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, 
Expected=b80b441a, Actual=3cb54f34 00:08:56.558 [2024-11-18 00:51:30.807755] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:56.558 [2024-11-18 00:51:30.808867] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=4ec2a02ce0d0eac0, Actual=4ec2a02cc0d0eac0 00:08:56.558 [2024-11-18 00:51:30.809990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.558 [2024-11-18 00:51:30.811116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=2088 00:08:56.558 [2024-11-18 00:51:30.812243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=205f 00:08:56.558 [2024-11-18 00:51:30.813374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=205f 00:08:56.558 [2024-11-18 00:51:30.814517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=7093fed8ec258050 00:08:56.558 [2024-11-18 00:51:30.815633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5d577b185097baaa, Actual=cdc3d7dcc330ddd6 00:08:56.558 passed 00:08:56.558 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-18 00:51:30.816016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=dd4c, Actual=fd4c 00:08:56.558 [2024-11-18 00:51:30.816303] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3733, Actual=1733 00:08:56.558 [2024-11-18 00:51:30.816571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:56.558 [2024-11-18 00:51:30.816869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:56.558 [2024-11-18 00:51:30.817174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:56.558 [2024-11-18 00:51:30.817461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:56.558 [2024-11-18 00:51:30.817778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=7f16 00:08:56.558 [2024-11-18 00:51:30.818054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=1179 00:08:56.558 [2024-11-18 00:51:30.818343] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3ab753ed, Actual=1ab753ed 00:08:56.558 [2024-11-18 00:51:30.818634] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f2c37b58, Actual=d2c37b58 00:08:56.558 [2024-11-18 00:51:30.818920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:56.558 [2024-11-18 
00:51:30.819221] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:56.558 [2024-11-18 00:51:30.819481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:56.558 [2024-11-18 00:51:30.819759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:56.558 [2024-11-18 00:51:30.820035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=3b3ec17c 00:08:56.558 [2024-11-18 00:51:30.820321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=1f035b43 00:08:56.558 [2024-11-18 00:51:30.820606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:56.558 [2024-11-18 00:51:30.820881] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5bbd3b5ff9036505, Actual=5bbd3b5fd9036505 00:08:56.558 [2024-11-18 00:51:30.821141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:56.558 [2024-11-18 00:51:30.821426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:56.558 [2024-11-18 00:51:30.821685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:56.558 [2024-11-18 00:51:30.821968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:56.558 [2024-11-18 00:51:30.822253] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=7093fed8ec258050 00:08:56.558 [2024-11-18 00:51:30.822528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=d8bc4cafdae35213 00:08:56.558 passed 00:08:56.558 Test: set_md_interleave_iovs_test ...passed 00:08:56.558 Test: set_md_interleave_iovs_split_test ...passed 00:08:56.558 Test: dif_generate_stream_pi_16_test ...passed 00:08:56.558 Test: dif_generate_stream_test ...passed 00:08:56.558 Test: set_md_interleave_iovs_alignment_test ...passed 00:08:56.558 Test: dif_generate_split_test ...[2024-11-18 00:51:30.830374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:08:56.558 passed 00:08:56.558 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:08:56.558 Test: dif_verify_split_test ...passed 00:08:56.558 Test: dif_verify_stream_multi_segments_test ...passed 00:08:56.558 Test: update_crc32c_pi_16_test ...passed 00:08:56.558 Test: update_crc32c_test ...passed 00:08:56.558 Test: dif_update_crc32c_split_test ...passed 00:08:56.558 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:08:56.558 Test: get_range_with_md_test ...passed 00:08:56.558 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:08:56.558 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:08:56.558 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:56.558 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:08:56.558 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:08:56.558 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:56.558 Test: dif_generate_and_verify_unmap_test ...passed 00:08:56.558 00:08:56.558 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.558 suites 1 1 n/a 0 0 00:08:56.558 tests 79 79 79 0 0 00:08:56.558 asserts 3584 3584 3584 0 n/a 00:08:56.558 00:08:56.558 Elapsed time = 0.359 seconds 00:08:56.558 00:51:30 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:08:56.558 00:08:56.558 00:08:56.558 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.558 http://cunit.sourceforge.net/ 00:08:56.558 00:08:56.558 00:08:56.558 Suite: iov 00:08:56.558 Test: test_single_iov ...passed 00:08:56.558 Test: test_simple_iov ...passed 00:08:56.558 Test: test_complex_iov ...passed 00:08:56.558 Test: test_iovs_to_buf ...passed 00:08:56.558 Test: test_buf_to_iovs ...passed 00:08:56.558 Test: test_memset ...passed 00:08:56.558 Test: test_iov_one ...passed 00:08:56.558 Test: test_iov_xfer ...passed 00:08:56.558 00:08:56.558 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.558 suites 1 1 n/a 0 0 00:08:56.558 tests 8 8 8 0 0 00:08:56.558 asserts 156 156 156 0 n/a 00:08:56.558 00:08:56.558 Elapsed time = 0.000 seconds 00:08:56.558 00:51:30 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:08:56.558 00:08:56.558 00:08:56.558 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.558 http://cunit.sourceforge.net/ 00:08:56.558 00:08:56.558 00:08:56.558 Suite: math 00:08:56.558 Test: test_serial_number_arithmetic ...passed 00:08:56.558 Suite: erase 00:08:56.558 Test: test_memset_s ...passed 00:08:56.558 00:08:56.558 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.558 suites 2 2 n/a 0 0 00:08:56.558 tests 2 2 2 0 0 00:08:56.558 asserts 18 18 18 0 n/a 00:08:56.558 00:08:56.558 Elapsed time = 0.000 seconds 00:08:56.818 00:51:30 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:08:56.818 00:08:56.818 00:08:56.818 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.818 http://cunit.sourceforge.net/ 00:08:56.818 00:08:56.818 00:08:56.818 Suite: pipe 00:08:56.818 Test: test_create_destroy ...passed 00:08:56.818 Test: test_write_get_buffer ...passed 00:08:56.818 Test: test_write_advance ...passed 00:08:56.818 Test: test_read_get_buffer ...passed 00:08:56.818 Test: test_read_advance ...passed 00:08:56.818 Test: test_data ...passed 00:08:56.818 00:08:56.818 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.818 suites 1 1 n/a 0 
0 00:08:56.818 tests 6 6 6 0 0 00:08:56.818 asserts 250 250 250 0 n/a 00:08:56.818 00:08:56.818 Elapsed time = 0.000 seconds 00:08:56.818 00:51:31 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:08:56.818 00:08:56.818 00:08:56.818 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.818 http://cunit.sourceforge.net/ 00:08:56.818 00:08:56.818 00:08:56.818 Suite: xor 00:08:56.818 Test: test_xor_gen ...passed 00:08:56.818 00:08:56.818 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.818 suites 1 1 n/a 0 0 00:08:56.818 tests 1 1 1 0 0 00:08:56.818 asserts 17 17 17 0 n/a 00:08:56.818 00:08:56.818 Elapsed time = 0.007 seconds 00:08:56.818 00:08:56.818 real 0m0.873s 00:08:56.818 user 0m0.616s 00:08:56.818 sys 0m0.262s 00:08:56.818 00:51:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:56.818 00:51:31 -- common/autotest_common.sh@10 -- # set +x 00:08:56.818 ************************************ 00:08:56.818 END TEST unittest_util 00:08:56.818 ************************************ 00:08:56.818 00:51:31 -- unit/unittest.sh@258 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:56.818 00:51:31 -- unit/unittest.sh@259 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:56.818 00:51:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:56.818 00:51:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.818 00:51:31 -- common/autotest_common.sh@10 -- # set +x 00:08:56.818 ************************************ 00:08:56.818 START TEST unittest_vhost 00:08:56.818 ************************************ 00:08:56.818 00:51:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:56.818 00:08:56.818 00:08:56.818 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.818 http://cunit.sourceforge.net/ 00:08:56.818 00:08:56.818 00:08:56.818 Suite: vhost_suite 00:08:56.818 Test: desc_to_iov_test ...[2024-11-18 00:51:31.158900] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:08:56.818 passed 00:08:56.818 Test: create_controller_test ...[2024-11-18 00:51:31.163790] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:56.818 [2024-11-18 00:51:31.163940] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:08:56.818 [2024-11-18 00:51:31.164076] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:56.818 [2024-11-18 00:51:31.164183] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:08:56.818 [2024-11-18 00:51:31.164257] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:08:56.818 [2024-11-18 00:51:31.164383] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-11-18 00:51:31.165525] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:08:56.818 passed 00:08:56.818 Test: session_find_by_vid_test ...passed 00:08:56.818 Test: remove_controller_test ...[2024-11-18 00:51:31.167958] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:08:56.818 passed 00:08:56.818 Test: vq_avail_ring_get_test ...passed 00:08:56.818 Test: vq_packed_ring_test ...passed 00:08:56.818 Test: vhost_blk_construct_test ...passed 00:08:56.818 00:08:56.818 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.818 suites 1 1 n/a 0 0 00:08:56.818 tests 7 7 7 0 0 00:08:56.818 asserts 145 145 145 0 n/a 00:08:56.818 00:08:56.818 Elapsed time = 0.013 seconds 00:08:56.818 00:08:56.818 real 0m0.064s 00:08:56.818 user 0m0.036s 00:08:56.818 sys 0m0.029s 00:08:56.818 00:51:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:56.818 00:51:31 -- common/autotest_common.sh@10 -- # set +x 00:08:56.818 ************************************ 00:08:56.818 END TEST unittest_vhost 00:08:56.818 ************************************ 00:08:57.078 00:51:31 -- unit/unittest.sh@261 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:57.078 00:51:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:57.078 00:51:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.078 00:51:31 -- common/autotest_common.sh@10 -- # set +x 00:08:57.078 ************************************ 00:08:57.078 START TEST unittest_dma 00:08:57.078 ************************************ 00:08:57.078 00:51:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:57.078 00:08:57.078 00:08:57.078 CUnit - A unit testing framework for C - Version 2.1-3 00:08:57.078 http://cunit.sourceforge.net/ 00:08:57.078 00:08:57.078 00:08:57.078 Suite: dma_suite 00:08:57.078 Test: test_dma ...[2024-11-18 00:51:31.280450] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:08:57.078 passed 00:08:57.078 00:08:57.078 Run Summary: Type Total Ran Passed Failed Inactive 00:08:57.078 suites 1 1 n/a 0 0 00:08:57.078 tests 1 1 1 0 0 00:08:57.078 asserts 50 50 50 0 n/a 00:08:57.078 00:08:57.078 Elapsed time = 0.001 seconds 00:08:57.078 00:08:57.078 real 0m0.033s 00:08:57.078 user 0m0.016s 00:08:57.078 sys 0m0.017s 00:08:57.078 00:51:31 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.078 ************************************ 00:08:57.078 END TEST unittest_dma 00:08:57.078 00:51:31 -- common/autotest_common.sh@10 -- # set +x 00:08:57.078 ************************************ 00:08:57.078 00:51:31 -- unit/unittest.sh@263 -- # run_test unittest_init unittest_init 00:08:57.078 00:51:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:57.078 00:51:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.078 00:51:31 -- common/autotest_common.sh@10 -- # set +x 00:08:57.078 ************************************ 00:08:57.078 START TEST unittest_init 00:08:57.078 ************************************ 00:08:57.078 00:51:31 -- common/autotest_common.sh@1114 -- # unittest_init 00:08:57.078 00:51:31 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:57.078 00:08:57.078 00:08:57.078 CUnit - A unit testing framework for C - Version 2.1-3 00:08:57.078 http://cunit.sourceforge.net/ 00:08:57.078 00:08:57.078 00:08:57.078 Suite: subsystem_suite 00:08:57.078 Test: subsystem_sort_test_depends_on_single ...passed 00:08:57.078 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:57.078 Test: subsystem_sort_test_missing_dependency ...[2024-11-18 00:51:31.386305] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:57.078 [2024-11-18 00:51:31.387163] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:57.078 passed 00:08:57.078 00:08:57.078 Run Summary: Type Total Ran Passed Failed Inactive 00:08:57.078 suites 1 1 n/a 0 0 00:08:57.078 tests 3 3 3 0 0 00:08:57.078 asserts 20 20 20 0 n/a 00:08:57.078 00:08:57.078 Elapsed time = 0.001 seconds 00:08:57.078 00:08:57.078 real 0m0.048s 00:08:57.078 user 0m0.026s 00:08:57.078 sys 0m0.022s 00:08:57.078 00:51:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.078 00:51:31 -- common/autotest_common.sh@10 -- # set +x 00:08:57.078 ************************************ 00:08:57.078 END TEST unittest_init 00:08:57.078 ************************************ 00:08:57.078 00:51:31 -- unit/unittest.sh@265 -- # [[ y == y ]] 00:08:57.078 00:51:31 -- unit/unittest.sh@266 -- # hostname 00:08:57.078 00:51:31 -- unit/unittest.sh@266 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -d . -c --no-external -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:57.337 geninfo: WARNING: invalid characters removed from testname! 
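
The subsystem_suite errors just above ("subsystem A dependency B is missing", "subsystem C is missing") come from the init library's dependency sort rejecting unknown dependencies. A minimal, self-contained sketch of that kind of check follows; the struct and names below are made up for illustration and are not the real spdk_subsystem types:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct subsystem {
        const char *name;
        const char *depends_on;   /* NULL when nothing is required */
    };

    /* Check that every declared dependency names a registered subsystem,
     * printing the same kind of diagnostic the unit test asserts on. */
    static bool check_dependencies(const struct subsystem *subs, size_t count)
    {
        bool ok = true;

        for (size_t i = 0; i < count; i++) {
            if (subs[i].depends_on == NULL) {
                continue;
            }
            bool found = false;
            for (size_t j = 0; j < count; j++) {
                if (strcmp(subs[j].name, subs[i].depends_on) == 0) {
                    found = true;
                    break;
                }
            }
            if (!found) {
                fprintf(stderr, "subsystem %s dependency %s is missing\n",
                        subs[i].name, subs[i].depends_on);
                ok = false;
            }
        }
        return ok;
    }

Running check_dependencies() over a table containing only {"A", "B"} prints the first error above; the real code additionally orders the subsystems so that dependencies initialize before the subsystems that need them.
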
00:09:23.888 00:51:55 -- unit/unittest.sh@267 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info
00:09:25.267 00:51:59 -- unit/unittest.sh@268 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:09:27.803 00:52:01 -- unit/unittest.sh@269 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:09:30.335 00:52:04 -- unit/unittest.sh@270 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:09:32.871 00:52:07 -- unit/unittest.sh@271 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:09:35.407 00:52:09 -- unit/unittest.sh@272 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:09:37.311 00:52:11 -- unit/unittest.sh@273 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info
00:52:11 -- unit/unittest.sh@274 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:09:38.243 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:09:38.243 Found 309 entries.
00:09:38.243 Found common filename prefix "/home/vagrant/spdk_repo/spdk"
00:09:38.243 Writing .css and .png files.
00:09:38.243 Generating output.
00:09:38.243 Processing file include/linux/virtio_ring.h 00:09:38.502 Processing file include/spdk/bdev_module.h 00:09:38.502 Processing file include/spdk/thread.h 00:09:38.502 Processing file include/spdk/endian.h 00:09:38.502 Processing file include/spdk/mmio.h 00:09:38.502 Processing file include/spdk/util.h 00:09:38.502 Processing file include/spdk/base64.h 00:09:38.502 Processing file include/spdk/nvme_spec.h 00:09:38.502 Processing file include/spdk/trace.h 00:09:38.502 Processing file include/spdk/nvme.h 00:09:38.502 Processing file include/spdk/histogram_data.h 00:09:38.502 Processing file include/spdk/nvmf_transport.h 00:09:38.760 Processing file include/spdk_internal/sgl.h 00:09:38.760 Processing file include/spdk_internal/virtio.h 00:09:38.760 Processing file include/spdk_internal/rdma.h 00:09:38.760 Processing file include/spdk_internal/utf.h 00:09:38.760 Processing file include/spdk_internal/sock.h 00:09:38.760 Processing file include/spdk_internal/nvme_tcp.h 00:09:38.760 Processing file lib/accel/accel_rpc.c 00:09:38.760 Processing file lib/accel/accel.c 00:09:38.760 Processing file lib/accel/accel_sw.c 00:09:39.017 Processing file lib/bdev/bdev_zone.c 00:09:39.017 Processing file lib/bdev/part.c 00:09:39.017 Processing file lib/bdev/scsi_nvme.c 00:09:39.017 Processing file lib/bdev/bdev.c 00:09:39.017 Processing file lib/bdev/bdev_rpc.c 00:09:39.276 Processing file lib/blob/blobstore.h 00:09:39.276 Processing file lib/blob/request.c 00:09:39.276 Processing file lib/blob/blob_bs_dev.c 00:09:39.276 Processing file lib/blob/zeroes.c 00:09:39.276 Processing file lib/blob/blobstore.c 00:09:39.534 Processing file lib/blobfs/blobfs.c 00:09:39.534 Processing file lib/blobfs/tree.c 00:09:39.534 Processing file lib/conf/conf.c 00:09:39.534 Processing file lib/dma/dma.c 00:09:39.793 Processing file lib/env_dpdk/pci_virtio.c 00:09:39.793 Processing file lib/env_dpdk/pci_ioat.c 00:09:39.793 Processing file lib/env_dpdk/pci_vmd.c 00:09:39.793 Processing file lib/env_dpdk/pci_dpdk.c 00:09:39.793 Processing file lib/env_dpdk/memory.c 00:09:39.793 Processing file lib/env_dpdk/env.c 00:09:39.793 Processing file lib/env_dpdk/pci_idxd.c 00:09:39.793 Processing file lib/env_dpdk/sigbus_handler.c 00:09:39.793 Processing file lib/env_dpdk/threads.c 00:09:39.793 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:09:39.793 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:09:39.793 Processing file lib/env_dpdk/init.c 00:09:39.793 Processing file lib/env_dpdk/pci_event.c 00:09:39.793 Processing file lib/env_dpdk/pci.c 00:09:40.052 Processing file lib/event/app_rpc.c 00:09:40.052 Processing file lib/event/app.c 00:09:40.052 Processing file lib/event/scheduler_static.c 00:09:40.052 Processing file lib/event/reactor.c 00:09:40.052 Processing file lib/event/log_rpc.c 00:09:40.311 Processing file lib/ftl/ftl_init.c 00:09:40.311 Processing file lib/ftl/ftl_debug.c 00:09:40.311 Processing file lib/ftl/ftl_sb.c 00:09:40.311 Processing file lib/ftl/ftl_writer.h 00:09:40.311 Processing file lib/ftl/ftl_l2p_cache.c 00:09:40.311 Processing file lib/ftl/ftl_trace.c 00:09:40.311 Processing file lib/ftl/ftl_band.h 00:09:40.311 Processing file lib/ftl/ftl_debug.h 00:09:40.311 Processing file lib/ftl/ftl_p2l.c 00:09:40.311 Processing file lib/ftl/ftl_nv_cache.h 00:09:40.311 Processing file lib/ftl/ftl_rq.c 00:09:40.311 Processing file lib/ftl/ftl_l2p.c 00:09:40.311 Processing file lib/ftl/ftl_nv_cache_io.h 00:09:40.311 Processing file lib/ftl/ftl_core.h 00:09:40.311 Processing file lib/ftl/ftl_l2p_flat.c 00:09:40.311 
Processing file lib/ftl/ftl_reloc.c 00:09:40.311 Processing file lib/ftl/ftl_io.c 00:09:40.311 Processing file lib/ftl/ftl_writer.c 00:09:40.311 Processing file lib/ftl/ftl_core.c 00:09:40.311 Processing file lib/ftl/ftl_io.h 00:09:40.311 Processing file lib/ftl/ftl_nv_cache.c 00:09:40.311 Processing file lib/ftl/ftl_layout.c 00:09:40.311 Processing file lib/ftl/ftl_band.c 00:09:40.311 Processing file lib/ftl/ftl_band_ops.c 00:09:40.569 Processing file lib/ftl/base/ftl_base_dev.c 00:09:40.569 Processing file lib/ftl/base/ftl_base_bdev.c 00:09:40.829 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:09:40.829 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:09:40.829 Processing file lib/ftl/mngt/ftl_mngt.c 00:09:40.829 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:09:40.829 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:09:40.829 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:09:40.829 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:09:40.829 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:09:40.829 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:09:40.829 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:09:40.829 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:09:40.829 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:09:40.829 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:09:40.829 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:09:40.829 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:09:40.829 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:09:40.829 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:09:40.829 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:09:40.829 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:09:41.087 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:09:41.087 Processing file lib/ftl/utils/ftl_property.h 00:09:41.087 Processing file lib/ftl/utils/ftl_mempool.c 00:09:41.087 Processing file lib/ftl/utils/ftl_property.c 00:09:41.087 Processing file lib/ftl/utils/ftl_bitmap.c 00:09:41.087 Processing file lib/ftl/utils/ftl_conf.c 00:09:41.087 Processing file lib/ftl/utils/ftl_df.h 00:09:41.087 Processing file lib/ftl/utils/ftl_md.c 00:09:41.087 Processing file lib/ftl/utils/ftl_addr_utils.h 00:09:41.087 Processing file lib/idxd/idxd_user.c 00:09:41.087 Processing file lib/idxd/idxd_internal.h 00:09:41.087 Processing file lib/idxd/idxd.c 00:09:41.346 Processing file lib/init/rpc.c 00:09:41.346 Processing file lib/init/subsystem.c 00:09:41.346 Processing file lib/init/subsystem_rpc.c 00:09:41.346 Processing file lib/init/json_config.c 00:09:41.346 Processing file lib/ioat/ioat.c 00:09:41.346 Processing file lib/ioat/ioat_internal.h 00:09:41.605 Processing file lib/iscsi/iscsi.c 00:09:41.605 Processing file lib/iscsi/task.c 00:09:41.605 Processing file lib/iscsi/iscsi.h 00:09:41.605 Processing file lib/iscsi/param.c 00:09:41.605 Processing file lib/iscsi/tgt_node.c 00:09:41.605 Processing file lib/iscsi/iscsi_subsystem.c 00:09:41.605 Processing file lib/iscsi/iscsi_rpc.c 00:09:41.605 Processing file lib/iscsi/md5.c 00:09:41.605 Processing file lib/iscsi/task.h 00:09:41.605 Processing file lib/iscsi/init_grp.c 00:09:41.605 Processing file lib/iscsi/portal_grp.c 00:09:41.605 Processing file lib/iscsi/conn.c 00:09:41.864 Processing file lib/json/json_parse.c 00:09:41.864 Processing file lib/json/json_util.c 00:09:41.864 Processing file lib/json/json_write.c 00:09:41.864 Processing file lib/jsonrpc/jsonrpc_client.c 00:09:41.864 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:09:41.864 Processing file lib/jsonrpc/jsonrpc_server.c 00:09:41.864 
Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:09:41.864 Processing file lib/log/log_flags.c 00:09:41.864 Processing file lib/log/log.c 00:09:41.864 Processing file lib/log/log_deprecated.c 00:09:42.122 Processing file lib/lvol/lvol.c 00:09:42.122 Processing file lib/nbd/nbd_rpc.c 00:09:42.122 Processing file lib/nbd/nbd.c 00:09:42.122 Processing file lib/notify/notify_rpc.c 00:09:42.122 Processing file lib/notify/notify.c 00:09:42.689 Processing file lib/nvme/nvme_transport.c 00:09:42.690 Processing file lib/nvme/nvme_ns.c 00:09:42.690 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:09:42.690 Processing file lib/nvme/nvme_tcp.c 00:09:42.690 Processing file lib/nvme/nvme_cuse.c 00:09:42.690 Processing file lib/nvme/nvme_pcie_common.c 00:09:42.690 Processing file lib/nvme/nvme_ns_cmd.c 00:09:42.690 Processing file lib/nvme/nvme_poll_group.c 00:09:42.690 Processing file lib/nvme/nvme_qpair.c 00:09:42.690 Processing file lib/nvme/nvme_vfio_user.c 00:09:42.690 Processing file lib/nvme/nvme_rdma.c 00:09:42.690 Processing file lib/nvme/nvme_internal.h 00:09:42.690 Processing file lib/nvme/nvme.c 00:09:42.690 Processing file lib/nvme/nvme_opal.c 00:09:42.690 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:09:42.690 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:09:42.690 Processing file lib/nvme/nvme_pcie_internal.h 00:09:42.690 Processing file lib/nvme/nvme_discovery.c 00:09:42.690 Processing file lib/nvme/nvme_fabric.c 00:09:42.690 Processing file lib/nvme/nvme_pcie.c 00:09:42.690 Processing file lib/nvme/nvme_io_msg.c 00:09:42.690 Processing file lib/nvme/nvme_quirks.c 00:09:42.690 Processing file lib/nvme/nvme_ctrlr.c 00:09:42.690 Processing file lib/nvme/nvme_zns.c 00:09:43.258 Processing file lib/nvmf/nvmf_rpc.c 00:09:43.258 Processing file lib/nvmf/ctrlr_bdev.c 00:09:43.258 Processing file lib/nvmf/nvmf_internal.h 00:09:43.258 Processing file lib/nvmf/ctrlr_discovery.c 00:09:43.258 Processing file lib/nvmf/nvmf.c 00:09:43.258 Processing file lib/nvmf/ctrlr.c 00:09:43.258 Processing file lib/nvmf/subsystem.c 00:09:43.258 Processing file lib/nvmf/tcp.c 00:09:43.258 Processing file lib/nvmf/rdma.c 00:09:43.258 Processing file lib/nvmf/transport.c 00:09:43.258 Processing file lib/rdma/common.c 00:09:43.258 Processing file lib/rdma/rdma_verbs.c 00:09:43.258 Processing file lib/rpc/rpc.c 00:09:43.517 Processing file lib/scsi/dev.c 00:09:43.517 Processing file lib/scsi/port.c 00:09:43.517 Processing file lib/scsi/lun.c 00:09:43.517 Processing file lib/scsi/scsi_bdev.c 00:09:43.517 Processing file lib/scsi/scsi_rpc.c 00:09:43.517 Processing file lib/scsi/scsi.c 00:09:43.517 Processing file lib/scsi/task.c 00:09:43.517 Processing file lib/scsi/scsi_pr.c 00:09:43.517 Processing file lib/sock/sock_rpc.c 00:09:43.517 Processing file lib/sock/sock.c 00:09:43.776 Processing file lib/thread/thread.c 00:09:43.776 Processing file lib/thread/iobuf.c 00:09:43.776 Processing file lib/trace/trace_flags.c 00:09:43.776 Processing file lib/trace/trace_rpc.c 00:09:43.776 Processing file lib/trace/trace.c 00:09:43.776 Processing file lib/trace_parser/trace.cpp 00:09:44.036 Processing file lib/ut/ut.c 00:09:44.036 Processing file lib/ut_mock/mock.c 00:09:44.296 Processing file lib/util/cpuset.c 00:09:44.296 Processing file lib/util/crc16.c 00:09:44.296 Processing file lib/util/file.c 00:09:44.296 Processing file lib/util/zipf.c 00:09:44.296 Processing file lib/util/xor.c 00:09:44.296 Processing file lib/util/crc32_ieee.c 00:09:44.296 Processing file lib/util/bit_array.c 00:09:44.296 Processing file 
lib/util/math.c 00:09:44.296 Processing file lib/util/crc32.c 00:09:44.296 Processing file lib/util/hexlify.c 00:09:44.296 Processing file lib/util/dif.c 00:09:44.296 Processing file lib/util/base64.c 00:09:44.296 Processing file lib/util/pipe.c 00:09:44.296 Processing file lib/util/fd.c 00:09:44.296 Processing file lib/util/uuid.c 00:09:44.296 Processing file lib/util/iov.c 00:09:44.296 Processing file lib/util/crc64.c 00:09:44.296 Processing file lib/util/strerror_tls.c 00:09:44.296 Processing file lib/util/string.c 00:09:44.296 Processing file lib/util/fd_group.c 00:09:44.296 Processing file lib/util/crc32c.c 00:09:44.296 Processing file lib/vfio_user/host/vfio_user.c 00:09:44.296 Processing file lib/vfio_user/host/vfio_user_pci.c 00:09:44.555 Processing file lib/vhost/rte_vhost_user.c 00:09:44.555 Processing file lib/vhost/vhost.c 00:09:44.555 Processing file lib/vhost/vhost_rpc.c 00:09:44.555 Processing file lib/vhost/vhost_blk.c 00:09:44.555 Processing file lib/vhost/vhost_internal.h 00:09:44.555 Processing file lib/vhost/vhost_scsi.c 00:09:44.816 Processing file lib/virtio/virtio.c 00:09:44.816 Processing file lib/virtio/virtio_vfio_user.c 00:09:44.816 Processing file lib/virtio/virtio_pci.c 00:09:44.816 Processing file lib/virtio/virtio_vhost_user.c 00:09:44.816 Processing file lib/vmd/led.c 00:09:44.816 Processing file lib/vmd/vmd.c 00:09:44.816 Processing file module/accel/dsa/accel_dsa_rpc.c 00:09:44.816 Processing file module/accel/dsa/accel_dsa.c 00:09:44.816 Processing file module/accel/error/accel_error.c 00:09:44.816 Processing file module/accel/error/accel_error_rpc.c 00:09:45.076 Processing file module/accel/iaa/accel_iaa.c 00:09:45.076 Processing file module/accel/iaa/accel_iaa_rpc.c 00:09:45.076 Processing file module/accel/ioat/accel_ioat_rpc.c 00:09:45.076 Processing file module/accel/ioat/accel_ioat.c 00:09:45.076 Processing file module/bdev/aio/bdev_aio_rpc.c 00:09:45.076 Processing file module/bdev/aio/bdev_aio.c 00:09:45.334 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:09:45.335 Processing file module/bdev/delay/vbdev_delay.c 00:09:45.335 Processing file module/bdev/error/vbdev_error.c 00:09:45.335 Processing file module/bdev/error/vbdev_error_rpc.c 00:09:45.335 Processing file module/bdev/ftl/bdev_ftl.c 00:09:45.335 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:09:45.593 Processing file module/bdev/gpt/vbdev_gpt.c 00:09:45.593 Processing file module/bdev/gpt/gpt.h 00:09:45.593 Processing file module/bdev/gpt/gpt.c 00:09:45.593 Processing file module/bdev/iscsi/bdev_iscsi.c 00:09:45.593 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:09:45.593 Processing file module/bdev/lvol/vbdev_lvol.c 00:09:45.593 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:09:45.851 Processing file module/bdev/malloc/bdev_malloc.c 00:09:45.851 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:09:45.851 Processing file module/bdev/null/bdev_null.c 00:09:45.851 Processing file module/bdev/null/bdev_null_rpc.c 00:09:46.110 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:09:46.110 Processing file module/bdev/nvme/bdev_mdns_client.c 00:09:46.110 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:09:46.110 Processing file module/bdev/nvme/vbdev_opal.c 00:09:46.110 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:09:46.110 Processing file module/bdev/nvme/bdev_nvme.c 00:09:46.110 Processing file module/bdev/nvme/nvme_rpc.c 00:09:46.369 Processing file module/bdev/passthru/vbdev_passthru.c 00:09:46.369 Processing file 
module/bdev/passthru/vbdev_passthru_rpc.c 00:09:46.369 Processing file module/bdev/raid/concat.c 00:09:46.369 Processing file module/bdev/raid/raid1.c 00:09:46.369 Processing file module/bdev/raid/raid0.c 00:09:46.369 Processing file module/bdev/raid/raid5f.c 00:09:46.369 Processing file module/bdev/raid/bdev_raid.h 00:09:46.369 Processing file module/bdev/raid/bdev_raid_sb.c 00:09:46.369 Processing file module/bdev/raid/bdev_raid.c 00:09:46.369 Processing file module/bdev/raid/bdev_raid_rpc.c 00:09:46.627 Processing file module/bdev/split/vbdev_split.c 00:09:46.627 Processing file module/bdev/split/vbdev_split_rpc.c 00:09:46.627 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:09:46.627 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:09:46.627 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:09:46.887 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:09:46.887 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:09:46.887 Processing file module/blob/bdev/blob_bdev.c 00:09:46.887 Processing file module/blobfs/bdev/blobfs_bdev.c 00:09:46.887 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:09:46.887 Processing file module/env_dpdk/env_dpdk_rpc.c 00:09:47.146 Processing file module/event/subsystems/accel/accel.c 00:09:47.146 Processing file module/event/subsystems/bdev/bdev.c 00:09:47.146 Processing file module/event/subsystems/iobuf/iobuf.c 00:09:47.146 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:09:47.146 Processing file module/event/subsystems/iscsi/iscsi.c 00:09:47.404 Processing file module/event/subsystems/nbd/nbd.c 00:09:47.404 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:09:47.404 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:09:47.404 Processing file module/event/subsystems/scheduler/scheduler.c 00:09:47.664 Processing file module/event/subsystems/scsi/scsi.c 00:09:47.664 Processing file module/event/subsystems/sock/sock.c 00:09:47.664 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:09:47.664 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:09:47.923 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:09:47.923 Processing file module/event/subsystems/vmd/vmd.c 00:09:47.923 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:09:47.923 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:09:47.923 Processing file module/scheduler/gscheduler/gscheduler.c 00:09:48.181 Processing file module/sock/sock_kernel.h 00:09:48.181 Processing file module/sock/posix/posix.c 00:09:48.181 Writing directory view page. 00:09:48.181 Overall coverage rate: 00:09:48.181 lines......: 39.1% (39266 of 100435 lines) 00:09:48.181 functions..: 42.8% (3587 of 8384 functions) 00:09:48.181 00:09:48.181 00:09:48.181 ===================== 00:09:48.181 All unit tests passed 00:09:48.181 ===================== 00:09:48.181 WARN: lcov not installed or SPDK built without coverage! 
00:09:48.181 00:52:22 -- unit/unittest.sh@277 -- # set +x 00:09:48.181 00:09:48.181 00:09:48.181 ************************************ 00:09:48.181 END TEST unittest 00:09:48.181 ************************************ 00:09:48.181 00:09:48.181 real 3m7.699s 00:09:48.181 user 2m38.268s 00:09:48.181 sys 0m20.603s 00:09:48.181 00:52:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:48.181 00:52:22 -- common/autotest_common.sh@10 -- # set +x 00:09:48.181 00:52:22 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:09:48.181 00:52:22 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:09:48.181 00:52:22 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:09:48.181 00:52:22 -- spdk/autotest.sh@160 -- # timing_enter lib 00:09:48.181 00:52:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:48.181 00:52:22 -- common/autotest_common.sh@10 -- # set +x 00:09:48.181 00:52:22 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:48.181 00:52:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:48.181 00:52:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:48.181 00:52:22 -- common/autotest_common.sh@10 -- # set +x 00:09:48.181 ************************************ 00:09:48.181 START TEST env 00:09:48.181 ************************************ 00:09:48.181 00:52:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:48.441 * Looking for test storage... 00:09:48.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:48.441 00:52:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:48.441 00:52:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:48.441 00:52:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:48.441 00:52:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:48.441 00:52:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:48.441 00:52:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:48.441 00:52:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:48.441 00:52:22 -- scripts/common.sh@335 -- # IFS=.-: 00:09:48.441 00:52:22 -- scripts/common.sh@335 -- # read -ra ver1 00:09:48.441 00:52:22 -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.441 00:52:22 -- scripts/common.sh@336 -- # read -ra ver2 00:09:48.441 00:52:22 -- scripts/common.sh@337 -- # local 'op=<' 00:09:48.441 00:52:22 -- scripts/common.sh@339 -- # ver1_l=2 00:09:48.441 00:52:22 -- scripts/common.sh@340 -- # ver2_l=1 00:09:48.441 00:52:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:48.441 00:52:22 -- scripts/common.sh@343 -- # case "$op" in 00:09:48.441 00:52:22 -- scripts/common.sh@344 -- # : 1 00:09:48.441 00:52:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:48.441 00:52:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.441 00:52:22 -- scripts/common.sh@364 -- # decimal 1 00:09:48.441 00:52:22 -- scripts/common.sh@352 -- # local d=1 00:09:48.441 00:52:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.441 00:52:22 -- scripts/common.sh@354 -- # echo 1 00:09:48.441 00:52:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:48.441 00:52:22 -- scripts/common.sh@365 -- # decimal 2 00:09:48.441 00:52:22 -- scripts/common.sh@352 -- # local d=2 00:09:48.441 00:52:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.441 00:52:22 -- scripts/common.sh@354 -- # echo 2 00:09:48.441 00:52:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:48.441 00:52:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:48.441 00:52:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:48.441 00:52:22 -- scripts/common.sh@367 -- # return 0 00:09:48.441 00:52:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.441 00:52:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:48.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.441 --rc genhtml_branch_coverage=1 00:09:48.441 --rc genhtml_function_coverage=1 00:09:48.441 --rc genhtml_legend=1 00:09:48.441 --rc geninfo_all_blocks=1 00:09:48.441 --rc geninfo_unexecuted_blocks=1 00:09:48.441 00:09:48.441 ' 00:09:48.441 00:52:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:48.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.441 --rc genhtml_branch_coverage=1 00:09:48.441 --rc genhtml_function_coverage=1 00:09:48.441 --rc genhtml_legend=1 00:09:48.441 --rc geninfo_all_blocks=1 00:09:48.441 --rc geninfo_unexecuted_blocks=1 00:09:48.441 00:09:48.441 ' 00:09:48.441 00:52:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:48.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.441 --rc genhtml_branch_coverage=1 00:09:48.441 --rc genhtml_function_coverage=1 00:09:48.441 --rc genhtml_legend=1 00:09:48.441 --rc geninfo_all_blocks=1 00:09:48.441 --rc geninfo_unexecuted_blocks=1 00:09:48.441 00:09:48.441 ' 00:09:48.441 00:52:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:48.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.441 --rc genhtml_branch_coverage=1 00:09:48.441 --rc genhtml_function_coverage=1 00:09:48.441 --rc genhtml_legend=1 00:09:48.441 --rc geninfo_all_blocks=1 00:09:48.441 --rc geninfo_unexecuted_blocks=1 00:09:48.441 00:09:48.441 ' 00:09:48.441 00:52:22 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:48.441 00:52:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:48.441 00:52:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:48.441 00:52:22 -- common/autotest_common.sh@10 -- # set +x 00:09:48.441 ************************************ 00:09:48.441 START TEST env_memory 00:09:48.441 ************************************ 00:09:48.441 00:52:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:48.441 00:09:48.441 00:09:48.441 CUnit - A unit testing framework for C - Version 2.1-3 00:09:48.441 http://cunit.sourceforge.net/ 00:09:48.441 00:09:48.441 00:09:48.441 Suite: memory 00:09:48.702 Test: alloc and free memory map ...[2024-11-18 00:52:22.848157] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:48.702 passed 00:09:48.702 Test: mem 
map translation ...[2024-11-18 00:52:22.903385] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:48.702 [2024-11-18 00:52:22.903728] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:48.702 [2024-11-18 00:52:22.903919] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:48.702 [2024-11-18 00:52:22.904083] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:48.702 passed 00:09:48.702 Test: mem map registration ...[2024-11-18 00:52:22.994537] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:48.702 [2024-11-18 00:52:22.994803] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:48.702 passed 00:09:48.967 Test: mem map adjacent registrations ...passed 00:09:48.967 00:09:48.967 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.967 suites 1 1 n/a 0 0 00:09:48.967 tests 4 4 4 0 0 00:09:48.967 asserts 152 152 152 0 n/a 00:09:48.967 00:09:48.967 Elapsed time = 0.316 seconds 00:09:48.967 00:09:48.967 real 0m0.354s 00:09:48.967 user 0m0.316s 00:09:48.967 sys 0m0.037s 00:09:48.967 00:52:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:48.967 00:52:23 -- common/autotest_common.sh@10 -- # set +x 00:09:48.967 ************************************ 00:09:48.967 END TEST env_memory 00:09:48.967 ************************************ 00:09:48.967 00:52:23 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:48.967 00:52:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:48.967 00:52:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:48.967 00:52:23 -- common/autotest_common.sh@10 -- # set +x 00:09:48.967 ************************************ 00:09:48.967 START TEST env_vtophys 00:09:48.967 ************************************ 00:09:48.967 00:52:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:48.967 EAL: lib.eal log level changed from notice to debug 00:09:48.967 EAL: Detected lcore 0 as core 0 on socket 0 00:09:48.967 EAL: Detected lcore 1 as core 0 on socket 0 00:09:48.967 EAL: Detected lcore 2 as core 0 on socket 0 00:09:48.967 EAL: Detected lcore 3 as core 0 on socket 0 00:09:48.967 EAL: Detected lcore 4 as core 0 on socket 0 00:09:48.967 EAL: Detected lcore 5 as core 0 on socket 0 00:09:48.967 EAL: Detected lcore 6 as core 0 on socket 0 00:09:48.967 EAL: Detected lcore 7 as core 0 on socket 0 00:09:48.967 EAL: Detected lcore 8 as core 0 on socket 0 00:09:48.967 EAL: Detected lcore 9 as core 0 on socket 0 00:09:48.967 EAL: Maximum logical cores by configuration: 128 00:09:48.967 EAL: Detected CPU lcores: 10 00:09:48.967 EAL: Detected NUMA nodes: 1 00:09:48.967 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:09:48.967 EAL: Checking presence of .so 'librte_eal.so.23' 00:09:48.967 EAL: Checking presence of .so 'librte_eal.so' 00:09:48.967 EAL: Detected static linkage of DPDK 00:09:48.967 EAL: No shared files mode enabled, IPC will be 
disabled 00:09:48.967 EAL: Selected IOVA mode 'PA' 00:09:48.967 EAL: Probing VFIO support... 00:09:48.967 EAL: IOMMU type 1 (Type 1) is supported 00:09:48.967 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:48.967 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:48.967 EAL: VFIO support initialized 00:09:48.967 EAL: Ask a virtual area of 0x2e000 bytes 00:09:48.967 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:48.967 EAL: Setting up physically contiguous memory... 00:09:48.967 EAL: Setting maximum number of open files to 1048576 00:09:48.967 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:48.967 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:48.968 EAL: Ask a virtual area of 0x61000 bytes 00:09:48.968 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:48.968 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:48.968 EAL: Ask a virtual area of 0x400000000 bytes 00:09:48.968 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:48.968 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:48.968 EAL: Ask a virtual area of 0x61000 bytes 00:09:48.968 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:48.968 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:48.968 EAL: Ask a virtual area of 0x400000000 bytes 00:09:48.968 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:48.968 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:48.968 EAL: Ask a virtual area of 0x61000 bytes 00:09:48.968 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:48.968 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:48.968 EAL: Ask a virtual area of 0x400000000 bytes 00:09:48.968 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:48.968 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:48.968 EAL: Ask a virtual area of 0x61000 bytes 00:09:48.968 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:48.968 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:48.968 EAL: Ask a virtual area of 0x400000000 bytes 00:09:48.968 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:48.968 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:48.968 EAL: Hugepages will be freed exactly as allocated. 00:09:48.968 EAL: No shared files mode enabled, IPC is disabled 00:09:48.968 EAL: No shared files mode enabled, IPC is disabled 00:09:49.227 EAL: TSC frequency is ~2100000 KHz 00:09:49.227 EAL: Main lcore 0 is ready (tid=7f529d1dfa80;cpuset=[0]) 00:09:49.227 EAL: Trying to obtain current memory policy. 00:09:49.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:49.227 EAL: Restoring previous memory policy: 0 00:09:49.227 EAL: request: mp_malloc_sync 00:09:49.227 EAL: No shared files mode enabled, IPC is disabled 00:09:49.227 EAL: Heap on socket 0 was expanded by 2MB 00:09:49.227 EAL: No shared files mode enabled, IPC is disabled 00:09:49.227 EAL: Mem event callback 'spdk:(nil)' registered 00:09:49.227 00:09:49.227 00:09:49.227 CUnit - A unit testing framework for C - Version 2.1-3 00:09:49.227 http://cunit.sourceforge.net/ 00:09:49.227 00:09:49.227 00:09:49.227 Suite: components_suite 00:09:49.794 Test: vtophys_malloc_test ...passed 00:09:49.794 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
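
The "Mem event callback 'spdk:(nil)' registered" line above is DPDK agreeing to notify SPDK whenever hugepage memory is added to or removed from a heap; every later "Calling mem event callback" line is one of those notifications. A sketch of registering such a listener, assuming the rte_mem_event_callback_register()/rte_mem_event_callback_t API from <rte_memory.h> (verify the exact prototype against the DPDK headers in use):

    #include <stdio.h>
    #include <rte_memory.h>

    /* Print every hugepage alloc/free event the EAL reports. */
    static void
    mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
    {
        (void)arg;
        printf("mem event: %s addr=%p len=%zu\n",
               event == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
    }

    static int
    register_mem_listener(void)
    {
        /* The name is only used for bookkeeping and later unregistration. */
        return rte_mem_event_callback_register("example", mem_event_cb, NULL);
    }

SPDK appears to register its callback with a NULL user argument, which is most likely why the log prints the entry as 'spdk:(nil)'.
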
00:09:49.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:49.794 EAL: Restoring previous memory policy: 0 00:09:49.794 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.794 EAL: request: mp_malloc_sync 00:09:49.794 EAL: No shared files mode enabled, IPC is disabled 00:09:49.794 EAL: Heap on socket 0 was expanded by 4MB 00:09:49.794 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.794 EAL: request: mp_malloc_sync 00:09:49.794 EAL: No shared files mode enabled, IPC is disabled 00:09:49.794 EAL: Heap on socket 0 was shrunk by 4MB 00:09:49.794 EAL: Trying to obtain current memory policy. 00:09:49.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:49.794 EAL: Restoring previous memory policy: 0 00:09:49.794 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.794 EAL: request: mp_malloc_sync 00:09:49.794 EAL: No shared files mode enabled, IPC is disabled 00:09:49.794 EAL: Heap on socket 0 was expanded by 6MB 00:09:49.794 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.794 EAL: request: mp_malloc_sync 00:09:49.794 EAL: No shared files mode enabled, IPC is disabled 00:09:49.794 EAL: Heap on socket 0 was shrunk by 6MB 00:09:49.794 EAL: Trying to obtain current memory policy. 00:09:49.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:49.794 EAL: Restoring previous memory policy: 0 00:09:49.794 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.794 EAL: request: mp_malloc_sync 00:09:49.794 EAL: No shared files mode enabled, IPC is disabled 00:09:49.794 EAL: Heap on socket 0 was expanded by 10MB 00:09:49.794 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.794 EAL: request: mp_malloc_sync 00:09:49.794 EAL: No shared files mode enabled, IPC is disabled 00:09:49.794 EAL: Heap on socket 0 was shrunk by 10MB 00:09:49.794 EAL: Trying to obtain current memory policy. 00:09:49.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:49.794 EAL: Restoring previous memory policy: 0 00:09:49.794 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.794 EAL: request: mp_malloc_sync 00:09:49.794 EAL: No shared files mode enabled, IPC is disabled 00:09:49.794 EAL: Heap on socket 0 was expanded by 18MB 00:09:49.794 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.794 EAL: request: mp_malloc_sync 00:09:49.794 EAL: No shared files mode enabled, IPC is disabled 00:09:49.794 EAL: Heap on socket 0 was shrunk by 18MB 00:09:49.794 EAL: Trying to obtain current memory policy. 00:09:49.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:49.794 EAL: Restoring previous memory policy: 0 00:09:49.794 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.794 EAL: request: mp_malloc_sync 00:09:49.794 EAL: No shared files mode enabled, IPC is disabled 00:09:49.794 EAL: Heap on socket 0 was expanded by 34MB 00:09:49.794 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.794 EAL: request: mp_malloc_sync 00:09:49.794 EAL: No shared files mode enabled, IPC is disabled 00:09:49.794 EAL: Heap on socket 0 was shrunk by 34MB 00:09:49.794 EAL: Trying to obtain current memory policy. 
00:09:49.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:49.794 EAL: Restoring previous memory policy: 0 00:09:49.794 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.794 EAL: request: mp_malloc_sync 00:09:49.794 EAL: No shared files mode enabled, IPC is disabled 00:09:49.794 EAL: Heap on socket 0 was expanded by 66MB 00:09:49.794 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.794 EAL: request: mp_malloc_sync 00:09:49.794 EAL: No shared files mode enabled, IPC is disabled 00:09:49.794 EAL: Heap on socket 0 was shrunk by 66MB 00:09:49.794 EAL: Trying to obtain current memory policy. 00:09:49.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:49.794 EAL: Restoring previous memory policy: 0 00:09:49.794 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.794 EAL: request: mp_malloc_sync 00:09:49.794 EAL: No shared files mode enabled, IPC is disabled 00:09:49.794 EAL: Heap on socket 0 was expanded by 130MB 00:09:50.054 EAL: Calling mem event callback 'spdk:(nil)' 00:09:50.054 EAL: request: mp_malloc_sync 00:09:50.054 EAL: No shared files mode enabled, IPC is disabled 00:09:50.054 EAL: Heap on socket 0 was shrunk by 130MB 00:09:50.054 EAL: Trying to obtain current memory policy. 00:09:50.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:50.054 EAL: Restoring previous memory policy: 0 00:09:50.054 EAL: Calling mem event callback 'spdk:(nil)' 00:09:50.054 EAL: request: mp_malloc_sync 00:09:50.054 EAL: No shared files mode enabled, IPC is disabled 00:09:50.054 EAL: Heap on socket 0 was expanded by 258MB 00:09:50.054 EAL: Calling mem event callback 'spdk:(nil)' 00:09:50.312 EAL: request: mp_malloc_sync 00:09:50.312 EAL: No shared files mode enabled, IPC is disabled 00:09:50.312 EAL: Heap on socket 0 was shrunk by 258MB 00:09:50.312 EAL: Trying to obtain current memory policy. 00:09:50.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:50.312 EAL: Restoring previous memory policy: 0 00:09:50.312 EAL: Calling mem event callback 'spdk:(nil)' 00:09:50.312 EAL: request: mp_malloc_sync 00:09:50.312 EAL: No shared files mode enabled, IPC is disabled 00:09:50.312 EAL: Heap on socket 0 was expanded by 514MB 00:09:50.571 EAL: Calling mem event callback 'spdk:(nil)' 00:09:50.830 EAL: request: mp_malloc_sync 00:09:50.830 EAL: No shared files mode enabled, IPC is disabled 00:09:50.830 EAL: Heap on socket 0 was shrunk by 514MB 00:09:50.830 EAL: Trying to obtain current memory policy. 
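
The expand/shrink pairs in this suite step through allocations of 2^n + 2 MB (4 MB, 6 MB, 10 MB, 18 MB, and so on up to 1026 MB further down), so the EAL has to grow the hugepage heap for each allocation and give the memory back after each free. A simplified stand-in for that loop, using plain malloc/free rather than the SPDK env allocator the real test drives:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MB (1024UL * 1024UL)

    int main(void)
    {
        /* Allocation sizes of (2^n + 2) MB: 4 MB, 6 MB, 10 MB, ... 1026 MB.
         * Touching the buffer forces the pages to be backed before freeing. */
        for (size_t sz = 2 * MB; sz <= 1024 * MB; sz *= 2) {
            size_t len = sz + 2 * MB;
            void *buf = malloc(len);

            if (buf == NULL) {
                fprintf(stderr, "allocation of %zu bytes failed\n", len);
                return 1;
            }
            memset(buf, 0xa5, len);
            free(buf);
        }
        return 0;
    }
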
00:09:50.830 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:51.088 EAL: Restoring previous memory policy: 0 00:09:51.088 EAL: Calling mem event callback 'spdk:(nil)' 00:09:51.088 EAL: request: mp_malloc_sync 00:09:51.088 EAL: No shared files mode enabled, IPC is disabled 00:09:51.088 EAL: Heap on socket 0 was expanded by 1026MB 00:09:51.347 EAL: Calling mem event callback 'spdk:(nil)' 00:09:51.636 EAL: request: mp_malloc_sync 00:09:51.636 EAL: No shared files mode enabled, IPC is disabled 00:09:51.636 passed 00:09:51.636 00:09:51.636 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:51.636 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.636 suites 1 1 n/a 0 0 00:09:51.636 tests 2 2 2 0 0 00:09:51.636 asserts 6303 6303 6303 0 n/a 00:09:51.636 00:09:51.636 Elapsed time = 2.562 seconds 00:09:51.636 EAL: Calling mem event callback 'spdk:(nil)' 00:09:51.636 EAL: request: mp_malloc_sync 00:09:51.636 EAL: No shared files mode enabled, IPC is disabled 00:09:51.636 EAL: Heap on socket 0 was shrunk by 2MB 00:09:51.636 EAL: No shared files mode enabled, IPC is disabled 00:09:51.636 EAL: No shared files mode enabled, IPC is disabled 00:09:51.636 EAL: No shared files mode enabled, IPC is disabled 00:09:51.945 00:09:51.945 real 0m2.850s 00:09:51.945 user 0m1.450s 00:09:51.945 sys 0m1.253s 00:09:51.945 ************************************ 00:09:51.945 END TEST env_vtophys 00:09:51.945 ************************************ 00:09:51.945 00:52:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:51.945 00:52:26 -- common/autotest_common.sh@10 -- # set +x 00:09:51.945 00:52:26 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:51.945 00:52:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:51.945 00:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.945 00:52:26 -- common/autotest_common.sh@10 -- # set +x 00:09:51.945 ************************************ 00:09:51.945 START TEST env_pci 00:09:51.945 ************************************ 00:09:51.945 00:52:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:51.945 00:09:51.945 00:09:51.945 CUnit - A unit testing framework for C - Version 2.1-3 00:09:51.945 http://cunit.sourceforge.net/ 00:09:51.945 00:09:51.945 00:09:51.945 Suite: pci 00:09:51.945 Test: pci_hook ...[2024-11-18 00:52:26.159239] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 114874 has claimed it 00:09:51.945 EAL: Cannot find device (10000:00:01.0) 00:09:51.945 EAL: Failed to attach device on primary process 00:09:51.945 passed 00:09:51.945 00:09:51.945 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.945 suites 1 1 n/a 0 0 00:09:51.945 tests 1 1 1 0 0 00:09:51.945 asserts 25 25 25 0 n/a 00:09:51.945 00:09:51.945 Elapsed time = 0.008 seconds 00:09:51.945 00:09:51.945 real 0m0.088s 00:09:51.945 user 0m0.041s 00:09:51.945 sys 0m0.047s 00:09:51.945 00:52:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:51.945 00:52:26 -- common/autotest_common.sh@10 -- # set +x 00:09:51.945 ************************************ 00:09:51.945 END TEST env_pci 00:09:51.945 ************************************ 00:09:51.945 00:52:26 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:51.945 00:52:26 -- env/env.sh@15 -- # uname 00:09:51.945 00:52:26 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:51.945 00:52:26 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:09:51.945 00:52:26 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:51.945 00:52:26 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:09:51.945 00:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.945 00:52:26 -- common/autotest_common.sh@10 -- # set +x 00:09:51.945 ************************************ 00:09:51.945 START TEST env_dpdk_post_init 00:09:51.945 ************************************ 00:09:51.945 00:52:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:52.204 EAL: Detected CPU lcores: 10 00:09:52.204 EAL: Detected NUMA nodes: 1 00:09:52.204 EAL: Detected static linkage of DPDK 00:09:52.204 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:52.204 EAL: Selected IOVA mode 'PA' 00:09:52.204 EAL: VFIO support initialized 00:09:52.204 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:52.204 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:09:52.204 Starting DPDK initialization... 00:09:52.204 Starting SPDK post initialization... 00:09:52.204 SPDK NVMe probe 00:09:52.204 Attaching to 0000:00:06.0 00:09:52.204 Attached to 0000:00:06.0 00:09:52.204 Cleaning up... 00:09:52.204 00:09:52.204 real 0m0.262s 00:09:52.204 user 0m0.079s 00:09:52.204 sys 0m0.084s 00:09:52.204 ************************************ 00:09:52.204 END TEST env_dpdk_post_init 00:09:52.204 ************************************ 00:09:52.204 00:52:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:52.204 00:52:26 -- common/autotest_common.sh@10 -- # set +x 00:09:52.463 00:52:26 -- env/env.sh@26 -- # uname 00:09:52.463 00:52:26 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:52.463 00:52:26 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:52.463 00:52:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.463 00:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.463 00:52:26 -- common/autotest_common.sh@10 -- # set +x 00:09:52.463 ************************************ 00:09:52.463 START TEST env_mem_callbacks 00:09:52.463 ************************************ 00:09:52.463 00:52:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:52.463 EAL: Detected CPU lcores: 10 00:09:52.463 EAL: Detected NUMA nodes: 1 00:09:52.463 EAL: Detected static linkage of DPDK 00:09:52.463 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:52.463 EAL: Selected IOVA mode 'PA' 00:09:52.463 EAL: VFIO support initialized 00:09:52.463 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:52.463 00:09:52.463 00:09:52.463 CUnit - A unit testing framework for C - Version 2.1-3 00:09:52.463 http://cunit.sourceforge.net/ 00:09:52.463 00:09:52.463 00:09:52.463 Suite: memory 00:09:52.463 Test: test ... 
00:09:52.463 register 0x200000200000 2097152 00:09:52.463 malloc 3145728 00:09:52.463 register 0x200000400000 4194304 00:09:52.463 buf 0x200000500000 len 3145728 PASSED 00:09:52.463 malloc 64 00:09:52.463 buf 0x2000004fff40 len 64 PASSED 00:09:52.463 malloc 4194304 00:09:52.463 register 0x200000800000 6291456 00:09:52.463 buf 0x200000a00000 len 4194304 PASSED 00:09:52.463 free 0x200000500000 3145728 00:09:52.463 free 0x2000004fff40 64 00:09:52.463 unregister 0x200000400000 4194304 PASSED 00:09:52.463 free 0x200000a00000 4194304 00:09:52.463 unregister 0x200000800000 6291456 PASSED 00:09:52.463 malloc 8388608 00:09:52.463 register 0x200000400000 10485760 00:09:52.463 buf 0x200000600000 len 8388608 PASSED 00:09:52.463 free 0x200000600000 8388608 00:09:52.463 unregister 0x200000400000 10485760 PASSED 00:09:52.463 passed 00:09:52.463 00:09:52.463 Run Summary: Type Total Ran Passed Failed Inactive 00:09:52.463 suites 1 1 n/a 0 0 00:09:52.463 tests 1 1 1 0 0 00:09:52.463 asserts 15 15 15 0 n/a 00:09:52.463 00:09:52.463 Elapsed time = 0.008 seconds 00:09:52.463 00:09:52.463 real 0m0.219s 00:09:52.463 user 0m0.049s 00:09:52.463 sys 0m0.067s 00:09:52.463 00:52:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:52.463 00:52:26 -- common/autotest_common.sh@10 -- # set +x 00:09:52.463 ************************************ 00:09:52.463 END TEST env_mem_callbacks 00:09:52.463 ************************************ 00:09:52.829 ************************************ 00:09:52.829 END TEST env 00:09:52.829 ************************************ 00:09:52.829 00:09:52.829 real 0m4.345s 00:09:52.829 user 0m2.253s 00:09:52.829 sys 0m1.746s 00:09:52.829 00:52:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:52.829 00:52:26 -- common/autotest_common.sh@10 -- # set +x 00:09:52.829 00:52:26 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:52.829 00:52:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.829 00:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.829 00:52:26 -- common/autotest_common.sh@10 -- # set +x 00:09:52.829 ************************************ 00:09:52.829 START TEST rpc 00:09:52.829 ************************************ 00:09:52.829 00:52:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:52.829 * Looking for test storage... 
00:09:52.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:52.829 00:52:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:52.829 00:52:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:52.829 00:52:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:52.829 00:52:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:52.829 00:52:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:52.829 00:52:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:52.829 00:52:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:52.829 00:52:27 -- scripts/common.sh@335 -- # IFS=.-: 00:09:52.829 00:52:27 -- scripts/common.sh@335 -- # read -ra ver1 00:09:52.829 00:52:27 -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.829 00:52:27 -- scripts/common.sh@336 -- # read -ra ver2 00:09:52.829 00:52:27 -- scripts/common.sh@337 -- # local 'op=<' 00:09:52.829 00:52:27 -- scripts/common.sh@339 -- # ver1_l=2 00:09:52.829 00:52:27 -- scripts/common.sh@340 -- # ver2_l=1 00:09:52.829 00:52:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:52.829 00:52:27 -- scripts/common.sh@343 -- # case "$op" in 00:09:52.829 00:52:27 -- scripts/common.sh@344 -- # : 1 00:09:52.829 00:52:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:52.829 00:52:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:52.829 00:52:27 -- scripts/common.sh@364 -- # decimal 1 00:09:52.829 00:52:27 -- scripts/common.sh@352 -- # local d=1 00:09:52.829 00:52:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.829 00:52:27 -- scripts/common.sh@354 -- # echo 1 00:09:52.829 00:52:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:52.829 00:52:27 -- scripts/common.sh@365 -- # decimal 2 00:09:52.829 00:52:27 -- scripts/common.sh@352 -- # local d=2 00:09:52.829 00:52:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.829 00:52:27 -- scripts/common.sh@354 -- # echo 2 00:09:52.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
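The rpc suite above launches spdk_tgt with '-e bdev' and then blocks until the target's RPC socket is listening (the "Waiting for process to start up..." echo and the waitforlisten 115012 call). A minimal sketch of that wait, assuming that checking for the socket file is enough for illustration; the helper name and timeout below are invented for the sketch, the suite's own helper is waitforlisten:

  # Illustrative helper (not the suite's waitforlisten): poll until the RPC UNIX socket exists.
  wait_for_rpc_socket() {
      local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
      while (( retries-- > 0 )); do
          [[ -S $sock ]] && return 0   # socket file is present, target is listening
          sleep 0.1
      done
      echo "timed out waiting for $sock" >&2
      return 1
  }

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &   # launch command as recorded above
  wait_for_rpc_socket /var/tmp/spdk.sock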
00:09:52.829 00:52:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:52.829 00:52:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:52.829 00:52:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:52.829 00:52:27 -- scripts/common.sh@367 -- # return 0 00:09:52.829 00:52:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.829 00:52:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:52.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.829 --rc genhtml_branch_coverage=1 00:09:52.829 --rc genhtml_function_coverage=1 00:09:52.829 --rc genhtml_legend=1 00:09:52.829 --rc geninfo_all_blocks=1 00:09:52.829 --rc geninfo_unexecuted_blocks=1 00:09:52.829 00:09:52.829 ' 00:09:52.829 00:52:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:52.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.829 --rc genhtml_branch_coverage=1 00:09:52.829 --rc genhtml_function_coverage=1 00:09:52.829 --rc genhtml_legend=1 00:09:52.829 --rc geninfo_all_blocks=1 00:09:52.829 --rc geninfo_unexecuted_blocks=1 00:09:52.829 00:09:52.829 ' 00:09:52.829 00:52:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:52.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.829 --rc genhtml_branch_coverage=1 00:09:52.829 --rc genhtml_function_coverage=1 00:09:52.829 --rc genhtml_legend=1 00:09:52.829 --rc geninfo_all_blocks=1 00:09:52.829 --rc geninfo_unexecuted_blocks=1 00:09:52.829 00:09:52.829 ' 00:09:52.829 00:52:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:52.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.829 --rc genhtml_branch_coverage=1 00:09:52.829 --rc genhtml_function_coverage=1 00:09:52.829 --rc genhtml_legend=1 00:09:52.829 --rc geninfo_all_blocks=1 00:09:52.829 --rc geninfo_unexecuted_blocks=1 00:09:52.829 00:09:52.829 ' 00:09:52.829 00:52:27 -- rpc/rpc.sh@65 -- # spdk_pid=115012 00:09:52.829 00:52:27 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:52.829 00:52:27 -- rpc/rpc.sh@67 -- # waitforlisten 115012 00:09:52.829 00:52:27 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:52.830 00:52:27 -- common/autotest_common.sh@829 -- # '[' -z 115012 ']' 00:09:52.830 00:52:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.830 00:52:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:52.830 00:52:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.830 00:52:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:52.830 00:52:27 -- common/autotest_common.sh@10 -- # set +x 00:09:53.116 [2024-11-18 00:52:27.265584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:53.116 [2024-11-18 00:52:27.266832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115012 ] 00:09:53.116 [2024-11-18 00:52:27.420344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.448 [2024-11-18 00:52:27.526930] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:53.448 [2024-11-18 00:52:27.527326] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:53.448 [2024-11-18 00:52:27.527460] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 115012' to capture a snapshot of events at runtime. 00:09:53.449 [2024-11-18 00:52:27.527556] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid115012 for offline analysis/debug. 00:09:53.449 [2024-11-18 00:52:27.527745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.043 00:52:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:54.043 00:52:28 -- common/autotest_common.sh@862 -- # return 0 00:09:54.043 00:52:28 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:54.043 00:52:28 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:54.043 00:52:28 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:54.043 00:52:28 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:54.043 00:52:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:54.043 00:52:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.043 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 ************************************ 00:09:54.043 START TEST rpc_integrity 00:09:54.043 ************************************ 00:09:54.043 00:52:28 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:09:54.043 00:52:28 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:54.043 00:52:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.043 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 00:52:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.043 00:52:28 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:54.043 00:52:28 -- rpc/rpc.sh@13 -- # jq length 00:09:54.043 00:52:28 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:54.043 00:52:28 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:54.043 00:52:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.043 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 00:52:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.043 00:52:28 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:54.043 00:52:28 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:54.043 00:52:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.043 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 00:52:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.043 00:52:28 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:54.043 { 00:09:54.043 "name": "Malloc0", 00:09:54.043 "aliases": [ 00:09:54.043 
"d7bb6756-82ca-43b7-bdfb-6ab1e264f94b" 00:09:54.043 ], 00:09:54.043 "product_name": "Malloc disk", 00:09:54.043 "block_size": 512, 00:09:54.043 "num_blocks": 16384, 00:09:54.043 "uuid": "d7bb6756-82ca-43b7-bdfb-6ab1e264f94b", 00:09:54.043 "assigned_rate_limits": { 00:09:54.043 "rw_ios_per_sec": 0, 00:09:54.043 "rw_mbytes_per_sec": 0, 00:09:54.043 "r_mbytes_per_sec": 0, 00:09:54.043 "w_mbytes_per_sec": 0 00:09:54.043 }, 00:09:54.043 "claimed": false, 00:09:54.043 "zoned": false, 00:09:54.043 "supported_io_types": { 00:09:54.043 "read": true, 00:09:54.043 "write": true, 00:09:54.043 "unmap": true, 00:09:54.043 "write_zeroes": true, 00:09:54.043 "flush": true, 00:09:54.043 "reset": true, 00:09:54.043 "compare": false, 00:09:54.043 "compare_and_write": false, 00:09:54.043 "abort": true, 00:09:54.043 "nvme_admin": false, 00:09:54.043 "nvme_io": false 00:09:54.043 }, 00:09:54.043 "memory_domains": [ 00:09:54.043 { 00:09:54.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.043 "dma_device_type": 2 00:09:54.043 } 00:09:54.043 ], 00:09:54.043 "driver_specific": {} 00:09:54.043 } 00:09:54.043 ]' 00:09:54.043 00:52:28 -- rpc/rpc.sh@17 -- # jq length 00:09:54.043 00:52:28 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:54.043 00:52:28 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:54.043 00:52:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.043 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 [2024-11-18 00:52:28.336481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:54.043 [2024-11-18 00:52:28.336594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.043 [2024-11-18 00:52:28.336652] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006080 00:09:54.043 [2024-11-18 00:52:28.336686] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.043 [2024-11-18 00:52:28.339642] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.043 [2024-11-18 00:52:28.339728] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:54.043 Passthru0 00:09:54.043 00:52:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.043 00:52:28 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:54.043 00:52:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.043 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 00:52:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.043 00:52:28 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:54.043 { 00:09:54.043 "name": "Malloc0", 00:09:54.043 "aliases": [ 00:09:54.043 "d7bb6756-82ca-43b7-bdfb-6ab1e264f94b" 00:09:54.043 ], 00:09:54.043 "product_name": "Malloc disk", 00:09:54.043 "block_size": 512, 00:09:54.043 "num_blocks": 16384, 00:09:54.043 "uuid": "d7bb6756-82ca-43b7-bdfb-6ab1e264f94b", 00:09:54.043 "assigned_rate_limits": { 00:09:54.043 "rw_ios_per_sec": 0, 00:09:54.043 "rw_mbytes_per_sec": 0, 00:09:54.043 "r_mbytes_per_sec": 0, 00:09:54.043 "w_mbytes_per_sec": 0 00:09:54.043 }, 00:09:54.043 "claimed": true, 00:09:54.043 "claim_type": "exclusive_write", 00:09:54.043 "zoned": false, 00:09:54.043 "supported_io_types": { 00:09:54.043 "read": true, 00:09:54.043 "write": true, 00:09:54.043 "unmap": true, 00:09:54.043 "write_zeroes": true, 00:09:54.043 "flush": true, 00:09:54.043 "reset": true, 00:09:54.043 "compare": false, 00:09:54.043 "compare_and_write": false, 00:09:54.043 "abort": true, 
00:09:54.043 "nvme_admin": false, 00:09:54.043 "nvme_io": false 00:09:54.043 }, 00:09:54.043 "memory_domains": [ 00:09:54.043 { 00:09:54.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.043 "dma_device_type": 2 00:09:54.043 } 00:09:54.043 ], 00:09:54.043 "driver_specific": {} 00:09:54.043 }, 00:09:54.043 { 00:09:54.043 "name": "Passthru0", 00:09:54.043 "aliases": [ 00:09:54.043 "acb0a5df-131c-58d6-91f4-9bd95132fe53" 00:09:54.043 ], 00:09:54.043 "product_name": "passthru", 00:09:54.043 "block_size": 512, 00:09:54.043 "num_blocks": 16384, 00:09:54.043 "uuid": "acb0a5df-131c-58d6-91f4-9bd95132fe53", 00:09:54.043 "assigned_rate_limits": { 00:09:54.043 "rw_ios_per_sec": 0, 00:09:54.043 "rw_mbytes_per_sec": 0, 00:09:54.043 "r_mbytes_per_sec": 0, 00:09:54.043 "w_mbytes_per_sec": 0 00:09:54.043 }, 00:09:54.043 "claimed": false, 00:09:54.043 "zoned": false, 00:09:54.043 "supported_io_types": { 00:09:54.043 "read": true, 00:09:54.043 "write": true, 00:09:54.043 "unmap": true, 00:09:54.043 "write_zeroes": true, 00:09:54.043 "flush": true, 00:09:54.043 "reset": true, 00:09:54.043 "compare": false, 00:09:54.043 "compare_and_write": false, 00:09:54.043 "abort": true, 00:09:54.043 "nvme_admin": false, 00:09:54.043 "nvme_io": false 00:09:54.043 }, 00:09:54.043 "memory_domains": [ 00:09:54.043 { 00:09:54.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.043 "dma_device_type": 2 00:09:54.043 } 00:09:54.043 ], 00:09:54.043 "driver_specific": { 00:09:54.043 "passthru": { 00:09:54.043 "name": "Passthru0", 00:09:54.043 "base_bdev_name": "Malloc0" 00:09:54.043 } 00:09:54.043 } 00:09:54.043 } 00:09:54.043 ]' 00:09:54.043 00:52:28 -- rpc/rpc.sh@21 -- # jq length 00:09:54.043 00:52:28 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:54.043 00:52:28 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:54.043 00:52:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.043 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 00:52:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.043 00:52:28 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:54.043 00:52:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.043 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 00:52:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.043 00:52:28 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:54.043 00:52:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.043 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 00:52:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.043 00:52:28 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:54.043 00:52:28 -- rpc/rpc.sh@26 -- # jq length 00:09:54.303 00:52:28 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:54.303 00:09:54.303 real 0m0.287s 00:09:54.303 user 0m0.175s 00:09:54.303 sys 0m0.039s 00:09:54.303 00:52:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:54.303 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.303 ************************************ 00:09:54.303 END TEST rpc_integrity 00:09:54.303 ************************************ 00:09:54.303 00:52:28 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:54.303 00:52:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:54.303 00:52:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.303 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.303 ************************************ 00:09:54.303 START TEST rpc_plugins 00:09:54.303 
************************************ 00:09:54.303 00:52:28 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:09:54.303 00:52:28 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:54.303 00:52:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.303 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.303 00:52:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.303 00:52:28 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:54.303 00:52:28 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:54.303 00:52:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.303 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.303 00:52:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.303 00:52:28 -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:54.303 { 00:09:54.303 "name": "Malloc1", 00:09:54.303 "aliases": [ 00:09:54.303 "3f4daab5-83fb-495c-84f6-ba24819d3b1e" 00:09:54.303 ], 00:09:54.303 "product_name": "Malloc disk", 00:09:54.303 "block_size": 4096, 00:09:54.303 "num_blocks": 256, 00:09:54.303 "uuid": "3f4daab5-83fb-495c-84f6-ba24819d3b1e", 00:09:54.303 "assigned_rate_limits": { 00:09:54.303 "rw_ios_per_sec": 0, 00:09:54.303 "rw_mbytes_per_sec": 0, 00:09:54.303 "r_mbytes_per_sec": 0, 00:09:54.303 "w_mbytes_per_sec": 0 00:09:54.303 }, 00:09:54.303 "claimed": false, 00:09:54.303 "zoned": false, 00:09:54.303 "supported_io_types": { 00:09:54.303 "read": true, 00:09:54.303 "write": true, 00:09:54.303 "unmap": true, 00:09:54.303 "write_zeroes": true, 00:09:54.303 "flush": true, 00:09:54.303 "reset": true, 00:09:54.303 "compare": false, 00:09:54.303 "compare_and_write": false, 00:09:54.303 "abort": true, 00:09:54.303 "nvme_admin": false, 00:09:54.303 "nvme_io": false 00:09:54.303 }, 00:09:54.303 "memory_domains": [ 00:09:54.303 { 00:09:54.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.303 "dma_device_type": 2 00:09:54.303 } 00:09:54.303 ], 00:09:54.303 "driver_specific": {} 00:09:54.303 } 00:09:54.303 ]' 00:09:54.303 00:52:28 -- rpc/rpc.sh@32 -- # jq length 00:09:54.303 00:52:28 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:54.303 00:52:28 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:54.303 00:52:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.303 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.303 00:52:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.303 00:52:28 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:54.303 00:52:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.303 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.303 00:52:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.303 00:52:28 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:54.303 00:52:28 -- rpc/rpc.sh@36 -- # jq length 00:09:54.303 00:52:28 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:54.303 00:09:54.303 real 0m0.141s 00:09:54.303 user 0m0.084s 00:09:54.303 sys 0m0.025s 00:09:54.303 00:52:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:54.303 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.303 ************************************ 00:09:54.303 END TEST rpc_plugins 00:09:54.303 ************************************ 00:09:54.562 00:52:28 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:54.562 00:52:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:54.562 00:52:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.562 00:52:28 -- common/autotest_common.sh@10 -- # set +x 
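The rpc_plugins run that just finished loads a test plugin via PYTHONPATH and calls its custom create_malloc/delete_malloc methods through rpc.py, then checks the bdev count with jq. A hedged sketch of the equivalent direct invocations (the $rpc variable is illustrative; the plugin module is assumed to already be on PYTHONPATH as rpc.sh exports it):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc --plugin rpc_plugin create_malloc               # plugin method; the run above gets Malloc1 back
  $rpc bdev_get_bdevs | jq length                      # 1 while the malloc bdev exists
  $rpc --plugin rpc_plugin delete_malloc Malloc1       # plugin cleanup method
  $rpc bdev_get_bdevs | jq length                      # back to 0, as the '[' 0 == 0 ']' check expects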
00:09:54.562 ************************************ 00:09:54.562 START TEST rpc_trace_cmd_test 00:09:54.562 ************************************ 00:09:54.562 00:52:28 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:09:54.562 00:52:28 -- rpc/rpc.sh@40 -- # local info 00:09:54.562 00:52:28 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:54.562 00:52:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.562 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.562 00:52:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.562 00:52:28 -- rpc/rpc.sh@42 -- # info='{ 00:09:54.562 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid115012", 00:09:54.562 "tpoint_group_mask": "0x8", 00:09:54.562 "iscsi_conn": { 00:09:54.562 "mask": "0x2", 00:09:54.562 "tpoint_mask": "0x0" 00:09:54.562 }, 00:09:54.562 "scsi": { 00:09:54.562 "mask": "0x4", 00:09:54.562 "tpoint_mask": "0x0" 00:09:54.562 }, 00:09:54.562 "bdev": { 00:09:54.562 "mask": "0x8", 00:09:54.562 "tpoint_mask": "0xffffffffffffffff" 00:09:54.562 }, 00:09:54.562 "nvmf_rdma": { 00:09:54.562 "mask": "0x10", 00:09:54.562 "tpoint_mask": "0x0" 00:09:54.562 }, 00:09:54.562 "nvmf_tcp": { 00:09:54.562 "mask": "0x20", 00:09:54.562 "tpoint_mask": "0x0" 00:09:54.562 }, 00:09:54.562 "ftl": { 00:09:54.562 "mask": "0x40", 00:09:54.562 "tpoint_mask": "0x0" 00:09:54.562 }, 00:09:54.562 "blobfs": { 00:09:54.562 "mask": "0x80", 00:09:54.562 "tpoint_mask": "0x0" 00:09:54.562 }, 00:09:54.562 "dsa": { 00:09:54.562 "mask": "0x200", 00:09:54.562 "tpoint_mask": "0x0" 00:09:54.562 }, 00:09:54.562 "thread": { 00:09:54.562 "mask": "0x400", 00:09:54.562 "tpoint_mask": "0x0" 00:09:54.562 }, 00:09:54.562 "nvme_pcie": { 00:09:54.562 "mask": "0x800", 00:09:54.562 "tpoint_mask": "0x0" 00:09:54.562 }, 00:09:54.562 "iaa": { 00:09:54.562 "mask": "0x1000", 00:09:54.562 "tpoint_mask": "0x0" 00:09:54.562 }, 00:09:54.562 "nvme_tcp": { 00:09:54.562 "mask": "0x2000", 00:09:54.562 "tpoint_mask": "0x0" 00:09:54.562 }, 00:09:54.562 "bdev_nvme": { 00:09:54.562 "mask": "0x4000", 00:09:54.562 "tpoint_mask": "0x0" 00:09:54.562 } 00:09:54.562 }' 00:09:54.562 00:52:28 -- rpc/rpc.sh@43 -- # jq length 00:09:54.562 00:52:28 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:09:54.562 00:52:28 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:54.562 00:52:28 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:54.562 00:52:28 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:54.562 00:52:28 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:54.562 00:52:28 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:54.562 00:52:28 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:54.562 00:52:28 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:54.821 00:52:28 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:54.821 00:09:54.821 real 0m0.237s 00:09:54.821 user 0m0.200s 00:09:54.821 sys 0m0.031s 00:09:54.821 00:52:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:54.821 00:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:54.821 ************************************ 00:09:54.821 END TEST rpc_trace_cmd_test 00:09:54.821 ************************************ 00:09:54.821 00:52:29 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:54.821 00:52:29 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:54.821 00:52:29 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:54.821 00:52:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:54.821 00:52:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.821 00:52:29 -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.821 ************************************ 00:09:54.821 START TEST rpc_daemon_integrity 00:09:54.821 ************************************ 00:09:54.821 00:52:29 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:09:54.821 00:52:29 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:54.821 00:52:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.821 00:52:29 -- common/autotest_common.sh@10 -- # set +x 00:09:54.821 00:52:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.821 00:52:29 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:54.821 00:52:29 -- rpc/rpc.sh@13 -- # jq length 00:09:54.821 00:52:29 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:54.821 00:52:29 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:54.821 00:52:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.821 00:52:29 -- common/autotest_common.sh@10 -- # set +x 00:09:54.821 00:52:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.821 00:52:29 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:54.821 00:52:29 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:54.821 00:52:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.821 00:52:29 -- common/autotest_common.sh@10 -- # set +x 00:09:54.821 00:52:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.821 00:52:29 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:54.821 { 00:09:54.821 "name": "Malloc2", 00:09:54.821 "aliases": [ 00:09:54.821 "b617adf3-c5f1-41dd-8f83-5a676a21c48e" 00:09:54.821 ], 00:09:54.821 "product_name": "Malloc disk", 00:09:54.821 "block_size": 512, 00:09:54.821 "num_blocks": 16384, 00:09:54.821 "uuid": "b617adf3-c5f1-41dd-8f83-5a676a21c48e", 00:09:54.821 "assigned_rate_limits": { 00:09:54.821 "rw_ios_per_sec": 0, 00:09:54.821 "rw_mbytes_per_sec": 0, 00:09:54.821 "r_mbytes_per_sec": 0, 00:09:54.821 "w_mbytes_per_sec": 0 00:09:54.821 }, 00:09:54.821 "claimed": false, 00:09:54.821 "zoned": false, 00:09:54.821 "supported_io_types": { 00:09:54.821 "read": true, 00:09:54.821 "write": true, 00:09:54.821 "unmap": true, 00:09:54.821 "write_zeroes": true, 00:09:54.821 "flush": true, 00:09:54.821 "reset": true, 00:09:54.821 "compare": false, 00:09:54.821 "compare_and_write": false, 00:09:54.821 "abort": true, 00:09:54.821 "nvme_admin": false, 00:09:54.821 "nvme_io": false 00:09:54.821 }, 00:09:54.821 "memory_domains": [ 00:09:54.821 { 00:09:54.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.821 "dma_device_type": 2 00:09:54.821 } 00:09:54.821 ], 00:09:54.821 "driver_specific": {} 00:09:54.821 } 00:09:54.821 ]' 00:09:54.821 00:52:29 -- rpc/rpc.sh@17 -- # jq length 00:09:54.821 00:52:29 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:54.821 00:52:29 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:54.821 00:52:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.821 00:52:29 -- common/autotest_common.sh@10 -- # set +x 00:09:54.821 [2024-11-18 00:52:29.215796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:54.821 [2024-11-18 00:52:29.215895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.821 [2024-11-18 00:52:29.215941] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:54.821 [2024-11-18 00:52:29.215964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.821 [2024-11-18 00:52:29.218795] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.821 
[2024-11-18 00:52:29.218867] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:54.821 Passthru0 00:09:54.821 00:52:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.821 00:52:29 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:54.821 00:52:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.821 00:52:29 -- common/autotest_common.sh@10 -- # set +x 00:09:55.081 00:52:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.081 00:52:29 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:55.081 { 00:09:55.081 "name": "Malloc2", 00:09:55.081 "aliases": [ 00:09:55.081 "b617adf3-c5f1-41dd-8f83-5a676a21c48e" 00:09:55.081 ], 00:09:55.081 "product_name": "Malloc disk", 00:09:55.081 "block_size": 512, 00:09:55.081 "num_blocks": 16384, 00:09:55.081 "uuid": "b617adf3-c5f1-41dd-8f83-5a676a21c48e", 00:09:55.081 "assigned_rate_limits": { 00:09:55.081 "rw_ios_per_sec": 0, 00:09:55.081 "rw_mbytes_per_sec": 0, 00:09:55.081 "r_mbytes_per_sec": 0, 00:09:55.081 "w_mbytes_per_sec": 0 00:09:55.081 }, 00:09:55.081 "claimed": true, 00:09:55.081 "claim_type": "exclusive_write", 00:09:55.081 "zoned": false, 00:09:55.081 "supported_io_types": { 00:09:55.081 "read": true, 00:09:55.081 "write": true, 00:09:55.081 "unmap": true, 00:09:55.081 "write_zeroes": true, 00:09:55.081 "flush": true, 00:09:55.081 "reset": true, 00:09:55.081 "compare": false, 00:09:55.081 "compare_and_write": false, 00:09:55.081 "abort": true, 00:09:55.081 "nvme_admin": false, 00:09:55.081 "nvme_io": false 00:09:55.081 }, 00:09:55.081 "memory_domains": [ 00:09:55.081 { 00:09:55.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.081 "dma_device_type": 2 00:09:55.081 } 00:09:55.081 ], 00:09:55.081 "driver_specific": {} 00:09:55.081 }, 00:09:55.081 { 00:09:55.081 "name": "Passthru0", 00:09:55.081 "aliases": [ 00:09:55.081 "4d05e948-97f3-5d75-a57d-d96bbf0b7c2a" 00:09:55.081 ], 00:09:55.081 "product_name": "passthru", 00:09:55.081 "block_size": 512, 00:09:55.081 "num_blocks": 16384, 00:09:55.081 "uuid": "4d05e948-97f3-5d75-a57d-d96bbf0b7c2a", 00:09:55.081 "assigned_rate_limits": { 00:09:55.081 "rw_ios_per_sec": 0, 00:09:55.081 "rw_mbytes_per_sec": 0, 00:09:55.081 "r_mbytes_per_sec": 0, 00:09:55.081 "w_mbytes_per_sec": 0 00:09:55.081 }, 00:09:55.081 "claimed": false, 00:09:55.081 "zoned": false, 00:09:55.081 "supported_io_types": { 00:09:55.081 "read": true, 00:09:55.081 "write": true, 00:09:55.081 "unmap": true, 00:09:55.081 "write_zeroes": true, 00:09:55.081 "flush": true, 00:09:55.081 "reset": true, 00:09:55.081 "compare": false, 00:09:55.081 "compare_and_write": false, 00:09:55.081 "abort": true, 00:09:55.081 "nvme_admin": false, 00:09:55.081 "nvme_io": false 00:09:55.081 }, 00:09:55.081 "memory_domains": [ 00:09:55.081 { 00:09:55.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.081 "dma_device_type": 2 00:09:55.081 } 00:09:55.081 ], 00:09:55.081 "driver_specific": { 00:09:55.081 "passthru": { 00:09:55.081 "name": "Passthru0", 00:09:55.081 "base_bdev_name": "Malloc2" 00:09:55.081 } 00:09:55.081 } 00:09:55.081 } 00:09:55.081 ]' 00:09:55.081 00:52:29 -- rpc/rpc.sh@21 -- # jq length 00:09:55.081 00:52:29 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:55.081 00:52:29 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:55.081 00:52:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.081 00:52:29 -- common/autotest_common.sh@10 -- # set +x 00:09:55.081 00:52:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.081 00:52:29 -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:09:55.081 00:52:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.081 00:52:29 -- common/autotest_common.sh@10 -- # set +x 00:09:55.081 00:52:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.081 00:52:29 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:55.081 00:52:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.081 00:52:29 -- common/autotest_common.sh@10 -- # set +x 00:09:55.081 00:52:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.081 00:52:29 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:55.081 00:52:29 -- rpc/rpc.sh@26 -- # jq length 00:09:55.081 00:52:29 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:55.081 00:09:55.081 real 0m0.307s 00:09:55.081 user 0m0.197s 00:09:55.081 sys 0m0.039s 00:09:55.081 00:52:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:55.081 00:52:29 -- common/autotest_common.sh@10 -- # set +x 00:09:55.081 ************************************ 00:09:55.081 END TEST rpc_daemon_integrity 00:09:55.081 ************************************ 00:09:55.081 00:52:29 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:55.081 00:52:29 -- rpc/rpc.sh@84 -- # killprocess 115012 00:09:55.081 00:52:29 -- common/autotest_common.sh@936 -- # '[' -z 115012 ']' 00:09:55.081 00:52:29 -- common/autotest_common.sh@940 -- # kill -0 115012 00:09:55.081 00:52:29 -- common/autotest_common.sh@941 -- # uname 00:09:55.081 00:52:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:55.081 00:52:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115012 00:09:55.081 00:52:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:55.081 00:52:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:55.081 00:52:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115012' 00:09:55.081 killing process with pid 115012 00:09:55.081 00:52:29 -- common/autotest_common.sh@955 -- # kill 115012 00:09:55.081 00:52:29 -- common/autotest_common.sh@960 -- # wait 115012 00:09:56.019 00:09:56.019 real 0m3.172s 00:09:56.019 user 0m3.714s 00:09:56.019 sys 0m0.964s 00:09:56.019 00:52:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:56.019 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:09:56.019 ************************************ 00:09:56.019 END TEST rpc 00:09:56.019 ************************************ 00:09:56.019 00:52:30 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:56.019 00:52:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:56.019 00:52:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:56.019 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:09:56.019 ************************************ 00:09:56.019 START TEST rpc_client 00:09:56.019 ************************************ 00:09:56.019 00:52:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:56.019 * Looking for test storage... 
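Before moving on to rpc_client, the suite tears down the spdk_tgt it started: killprocess 115012 checks that the pid is still alive, resolves the process name, kills it and waits for it to exit. A simplified sketch of that cleanup, with the omissions flagged in the comments:

  # Simplified sketch of the killprocess flow recorded above; the real helper also refuses to
  # kill processes running as sudo and retries the wait, which is omitted here.
  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                      # still alive?
      local name
      name=$(ps --no-headers -o comm= "$pid")         # resolves to reactor_0 in this run
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                 # reap the child; ignore its exit status
  }

  killprocess 115012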
00:09:56.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:56.019 00:52:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:56.019 00:52:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:56.019 00:52:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:56.019 00:52:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:56.019 00:52:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:56.019 00:52:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:56.019 00:52:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:56.019 00:52:30 -- scripts/common.sh@335 -- # IFS=.-: 00:09:56.019 00:52:30 -- scripts/common.sh@335 -- # read -ra ver1 00:09:56.019 00:52:30 -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.019 00:52:30 -- scripts/common.sh@336 -- # read -ra ver2 00:09:56.019 00:52:30 -- scripts/common.sh@337 -- # local 'op=<' 00:09:56.019 00:52:30 -- scripts/common.sh@339 -- # ver1_l=2 00:09:56.019 00:52:30 -- scripts/common.sh@340 -- # ver2_l=1 00:09:56.019 00:52:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:56.019 00:52:30 -- scripts/common.sh@343 -- # case "$op" in 00:09:56.019 00:52:30 -- scripts/common.sh@344 -- # : 1 00:09:56.019 00:52:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:56.019 00:52:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:56.019 00:52:30 -- scripts/common.sh@364 -- # decimal 1 00:09:56.019 00:52:30 -- scripts/common.sh@352 -- # local d=1 00:09:56.019 00:52:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.019 00:52:30 -- scripts/common.sh@354 -- # echo 1 00:09:56.019 00:52:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:56.019 00:52:30 -- scripts/common.sh@365 -- # decimal 2 00:09:56.019 00:52:30 -- scripts/common.sh@352 -- # local d=2 00:09:56.019 00:52:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.019 00:52:30 -- scripts/common.sh@354 -- # echo 2 00:09:56.019 00:52:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:56.019 00:52:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:56.019 00:52:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:56.019 00:52:30 -- scripts/common.sh@367 -- # return 0 00:09:56.019 00:52:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.019 00:52:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:56.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.019 --rc genhtml_branch_coverage=1 00:09:56.019 --rc genhtml_function_coverage=1 00:09:56.019 --rc genhtml_legend=1 00:09:56.019 --rc geninfo_all_blocks=1 00:09:56.019 --rc geninfo_unexecuted_blocks=1 00:09:56.019 00:09:56.019 ' 00:09:56.019 00:52:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:56.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.019 --rc genhtml_branch_coverage=1 00:09:56.019 --rc genhtml_function_coverage=1 00:09:56.019 --rc genhtml_legend=1 00:09:56.019 --rc geninfo_all_blocks=1 00:09:56.019 --rc geninfo_unexecuted_blocks=1 00:09:56.019 00:09:56.019 ' 00:09:56.019 00:52:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:56.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.019 --rc genhtml_branch_coverage=1 00:09:56.019 --rc genhtml_function_coverage=1 00:09:56.019 --rc genhtml_legend=1 00:09:56.019 --rc geninfo_all_blocks=1 00:09:56.019 --rc geninfo_unexecuted_blocks=1 00:09:56.019 00:09:56.019 ' 00:09:56.019 
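The lcov-version probe that runs here (lt 1.15 2 via cmp_versions, splitting each version on '.' and '-' and comparing field by field) decides which --rc option spelling to export for coverage runs. A compact standalone sketch of the same comparison (the suite's cmp_versions also splits on ':' and handles more operators; this sketch assumes plain dotted numbers):

  # lt A B  ->  succeeds when dotted version A is strictly less than B (simplified sketch)
  lt() {
      local -a a b
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first differing field decides
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1                                        # equal is not "less than"
  }

  lt 1.15 2 && echo "lcov < 2: keep the 1.x --rc option names"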
00:52:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:56.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.019 --rc genhtml_branch_coverage=1 00:09:56.019 --rc genhtml_function_coverage=1 00:09:56.019 --rc genhtml_legend=1 00:09:56.019 --rc geninfo_all_blocks=1 00:09:56.019 --rc geninfo_unexecuted_blocks=1 00:09:56.019 00:09:56.019 ' 00:09:56.019 00:52:30 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:56.277 OK 00:09:56.277 00:52:30 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:56.277 00:09:56.277 real 0m0.252s 00:09:56.277 user 0m0.140s 00:09:56.277 sys 0m0.136s 00:09:56.277 00:52:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:56.277 ************************************ 00:09:56.277 END TEST rpc_client 00:09:56.277 ************************************ 00:09:56.277 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:09:56.277 00:52:30 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:56.277 00:52:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:56.278 00:52:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:56.278 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:09:56.278 ************************************ 00:09:56.278 START TEST json_config 00:09:56.278 ************************************ 00:09:56.278 00:52:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:56.278 00:52:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:56.278 00:52:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:56.278 00:52:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:56.278 00:52:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:56.278 00:52:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:56.278 00:52:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:56.278 00:52:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:56.278 00:52:30 -- scripts/common.sh@335 -- # IFS=.-: 00:09:56.278 00:52:30 -- scripts/common.sh@335 -- # read -ra ver1 00:09:56.278 00:52:30 -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.278 00:52:30 -- scripts/common.sh@336 -- # read -ra ver2 00:09:56.278 00:52:30 -- scripts/common.sh@337 -- # local 'op=<' 00:09:56.278 00:52:30 -- scripts/common.sh@339 -- # ver1_l=2 00:09:56.278 00:52:30 -- scripts/common.sh@340 -- # ver2_l=1 00:09:56.278 00:52:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:56.278 00:52:30 -- scripts/common.sh@343 -- # case "$op" in 00:09:56.278 00:52:30 -- scripts/common.sh@344 -- # : 1 00:09:56.278 00:52:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:56.278 00:52:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.278 00:52:30 -- scripts/common.sh@364 -- # decimal 1 00:09:56.278 00:52:30 -- scripts/common.sh@352 -- # local d=1 00:09:56.278 00:52:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.278 00:52:30 -- scripts/common.sh@354 -- # echo 1 00:09:56.278 00:52:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:56.278 00:52:30 -- scripts/common.sh@365 -- # decimal 2 00:09:56.278 00:52:30 -- scripts/common.sh@352 -- # local d=2 00:09:56.278 00:52:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.278 00:52:30 -- scripts/common.sh@354 -- # echo 2 00:09:56.278 00:52:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:56.278 00:52:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:56.536 00:52:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:56.536 00:52:30 -- scripts/common.sh@367 -- # return 0 00:09:56.536 00:52:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.536 00:52:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:56.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.536 --rc genhtml_branch_coverage=1 00:09:56.536 --rc genhtml_function_coverage=1 00:09:56.536 --rc genhtml_legend=1 00:09:56.536 --rc geninfo_all_blocks=1 00:09:56.536 --rc geninfo_unexecuted_blocks=1 00:09:56.536 00:09:56.536 ' 00:09:56.536 00:52:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:56.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.536 --rc genhtml_branch_coverage=1 00:09:56.536 --rc genhtml_function_coverage=1 00:09:56.536 --rc genhtml_legend=1 00:09:56.536 --rc geninfo_all_blocks=1 00:09:56.536 --rc geninfo_unexecuted_blocks=1 00:09:56.536 00:09:56.536 ' 00:09:56.536 00:52:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:56.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.536 --rc genhtml_branch_coverage=1 00:09:56.536 --rc genhtml_function_coverage=1 00:09:56.536 --rc genhtml_legend=1 00:09:56.536 --rc geninfo_all_blocks=1 00:09:56.536 --rc geninfo_unexecuted_blocks=1 00:09:56.536 00:09:56.536 ' 00:09:56.536 00:52:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:56.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.536 --rc genhtml_branch_coverage=1 00:09:56.536 --rc genhtml_function_coverage=1 00:09:56.536 --rc genhtml_legend=1 00:09:56.536 --rc geninfo_all_blocks=1 00:09:56.536 --rc geninfo_unexecuted_blocks=1 00:09:56.536 00:09:56.536 ' 00:09:56.536 00:52:30 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:56.536 00:52:30 -- nvmf/common.sh@7 -- # uname -s 00:09:56.536 00:52:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.536 00:52:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.536 00:52:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.536 00:52:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.536 00:52:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.536 00:52:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.536 00:52:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.536 00:52:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.536 00:52:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.536 00:52:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.536 00:52:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df108f0c-9acc-4fe1-91f1-5b9b098bb741 
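The nvmf/common.sh lines here generate a host NQN with nvme gen-hostnqn and keep its embedded UUID as the host ID; both values are visible in the log. A short sketch of that derivation (the parameter expansion used to strip the prefix is an assumption, the log only shows the resulting values):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # yields e.g. nqn.2014-08.org.nvmexpress:uuid:df108f0c-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: the host ID is just the UUID suffix of the NQN
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # later reused by 'nvme connect'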
00:09:56.537 00:52:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=df108f0c-9acc-4fe1-91f1-5b9b098bb741 00:09:56.537 00:52:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.537 00:52:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.537 00:52:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:56.537 00:52:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.537 00:52:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.537 00:52:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.537 00:52:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.537 00:52:30 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:56.537 00:52:30 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:56.537 00:52:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:56.537 00:52:30 -- paths/export.sh@5 -- # export PATH 00:09:56.537 00:52:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:56.537 00:52:30 -- nvmf/common.sh@46 -- # : 0 00:09:56.537 00:52:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:56.537 00:52:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:56.537 00:52:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:56.537 00:52:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.537 00:52:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.537 00:52:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:56.537 00:52:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:56.537 00:52:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:56.537 00:52:30 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:09:56.537 00:52:30 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:09:56.537 00:52:30 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:09:56.537 00:52:30 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:56.537 00:52:30 -- 
json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:09:56.537 00:52:30 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:09:56.537 00:52:30 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:56.537 00:52:30 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:09:56.537 00:52:30 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:56.537 00:52:30 -- json_config/json_config.sh@32 -- # declare -A app_params 00:09:56.537 00:52:30 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:56.537 00:52:30 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:09:56.537 00:52:30 -- json_config/json_config.sh@43 -- # last_event_id=0 00:09:56.537 00:52:30 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:56.537 INFO: JSON configuration test init 00:09:56.537 00:52:30 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:09:56.537 00:52:30 -- json_config/json_config.sh@420 -- # json_config_test_init 00:09:56.537 00:52:30 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:09:56.537 00:52:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:56.537 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:09:56.537 00:52:30 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:09:56.537 00:52:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:56.537 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:09:56.537 00:52:30 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:09:56.537 00:52:30 -- json_config/json_config.sh@98 -- # local app=target 00:09:56.537 00:52:30 -- json_config/json_config.sh@99 -- # shift 00:09:56.537 00:52:30 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:09:56.537 00:52:30 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:09:56.537 00:52:30 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:09:56.537 00:52:30 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:56.537 00:52:30 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:56.537 00:52:30 -- json_config/json_config.sh@111 -- # app_pid[$app]=115305 00:09:56.537 Waiting for target to run... 00:09:56.537 00:52:30 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:09:56.537 00:52:30 -- json_config/json_config.sh@114 -- # waitforlisten 115305 /var/tmp/spdk_tgt.sock 00:09:56.537 00:52:30 -- common/autotest_common.sh@829 -- # '[' -z 115305 ']' 00:09:56.537 00:52:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:56.537 00:52:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.537 00:52:30 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:56.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:56.537 00:52:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
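From here the json_config test brings up its own spdk_tgt on a dedicated socket (-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc, pid 115305 below) and drives everything through rpc.py. A hedged sketch of that sequence, using only RPCs that appear later in this log:

  # tgt_rpc mirrors the wrapper the test uses: every call goes to the dedicated spdk_tgt.sock socket
  tgt_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # (poll for the socket before issuing RPCs, as in the earlier socket-wait sketch)

  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems | tgt_rpc load_config
  tgt_rpc notify_get_types                  # the test expects exactly bdev_register and bdev_unregister
  tgt_rpc notify_get_notifications -i 0     # notifications since id 0, e.g. bdev_register:Nvme0n1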
00:09:56.537 00:52:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.537 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:09:56.537 [2024-11-18 00:52:30.786892] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:56.537 [2024-11-18 00:52:30.787163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115305 ] 00:09:57.105 [2024-11-18 00:52:31.355141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.105 [2024-11-18 00:52:31.402739] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:57.105 [2024-11-18 00:52:31.403015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.672 00:52:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.672 00:52:31 -- common/autotest_common.sh@862 -- # return 0 00:09:57.672 00:09:57.672 00:52:31 -- json_config/json_config.sh@115 -- # echo '' 00:09:57.672 00:52:31 -- json_config/json_config.sh@322 -- # create_accel_config 00:09:57.672 00:52:31 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:09:57.672 00:52:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:57.672 00:52:31 -- common/autotest_common.sh@10 -- # set +x 00:09:57.672 00:52:31 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:09:57.672 00:52:31 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:09:57.672 00:52:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:57.672 00:52:31 -- common/autotest_common.sh@10 -- # set +x 00:09:57.672 00:52:31 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:57.672 00:52:31 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:09:57.672 00:52:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:57.929 00:52:32 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:09:57.929 00:52:32 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:09:57.929 00:52:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:57.930 00:52:32 -- common/autotest_common.sh@10 -- # set +x 00:09:57.930 00:52:32 -- json_config/json_config.sh@48 -- # local ret=0 00:09:57.930 00:52:32 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:57.930 00:52:32 -- json_config/json_config.sh@49 -- # local enabled_types 00:09:57.930 00:52:32 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:57.930 00:52:32 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:57.930 00:52:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:58.188 00:52:32 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:58.188 00:52:32 -- json_config/json_config.sh@51 -- # local get_types 00:09:58.188 00:52:32 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:58.188 00:52:32 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:09:58.188 00:52:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:58.188 00:52:32 -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.188 00:52:32 -- json_config/json_config.sh@58 -- # return 0 00:09:58.188 00:52:32 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:09:58.188 00:52:32 -- json_config/json_config.sh@332 -- # create_bdev_subsystem_config 00:09:58.188 00:52:32 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:09:58.188 00:52:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:58.188 00:52:32 -- common/autotest_common.sh@10 -- # set +x 00:09:58.188 00:52:32 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:09:58.188 00:52:32 -- json_config/json_config.sh@160 -- # local expected_notifications 00:09:58.188 00:52:32 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:09:58.188 00:52:32 -- json_config/json_config.sh@164 -- # get_notifications 00:09:58.188 00:52:32 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:09:58.188 00:52:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:58.188 00:52:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:58.188 00:52:32 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:09:58.188 00:52:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:58.188 00:52:32 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:58.447 00:52:32 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:09:58.447 00:52:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:58.447 00:52:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:58.447 00:52:32 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:09:58.447 00:52:32 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:09:58.447 00:52:32 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:58.447 00:52:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:58.706 Nvme0n1p0 Nvme0n1p1 00:09:58.706 00:52:33 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:58.706 00:52:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:58.965 [2024-11-18 00:52:33.307298] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:58.965 [2024-11-18 00:52:33.307443] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:58.965 00:09:58.965 00:52:33 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:58.965 00:52:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:59.223 Malloc3 00:09:59.223 00:52:33 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:59.223 00:52:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:59.482 [2024-11-18 00:52:33.743476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:59.482 [2024-11-18 00:52:33.743614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:09:59.482 [2024-11-18 00:52:33.743667] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:09:59.482 [2024-11-18 00:52:33.743704] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.482 [2024-11-18 00:52:33.746673] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.482 [2024-11-18 00:52:33.746728] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:59.482 PTBdevFromMalloc3 00:09:59.482 00:52:33 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:59.482 00:52:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:59.739 Null0 00:09:59.739 00:52:33 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:59.739 00:52:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:59.739 Malloc0 00:09:59.739 00:52:34 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:59.739 00:52:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:09:59.998 Malloc1 00:09:59.998 00:52:34 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:59.998 00:52:34 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:10:00.255 102400+0 records in 00:10:00.255 102400+0 records out 00:10:00.255 104857600 bytes (105 MB, 100 MiB) copied, 0.35089 s, 299 MB/s 00:10:00.255 00:52:34 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:10:00.255 00:52:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:10:00.512 aio_disk 00:10:00.512 00:52:34 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:10:00.512 00:52:34 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:00.512 00:52:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:00.771 ec11967e-4e72-4be6-b22a-88f5defaca7d 00:10:00.771 00:52:35 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:10:00.771 00:52:35 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:10:00.771 00:52:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:10:01.030 00:52:35 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t 
lvol1 32 00:10:01.030 00:52:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:10:01.289 00:52:35 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:01.289 00:52:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:01.547 00:52:35 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:01.547 00:52:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:01.807 00:52:35 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:10:01.807 00:52:35 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:10:01.807 00:52:35 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:45083f67-cc4d-419c-80ff-17d8630ff1ca bdev_register:8c034bb7-b4c8-4dee-95ac-5dcbd79d382c bdev_register:a139095e-f4e1-47b2-9438-fc33ab82f857 bdev_register:6f82437d-f838-4443-87d2-6cefd7f034be 00:10:01.807 00:52:35 -- json_config/json_config.sh@70 -- # local events_to_check 00:10:01.807 00:52:35 -- json_config/json_config.sh@71 -- # local recorded_events 00:10:01.807 00:52:35 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:10:01.807 00:52:35 -- json_config/json_config.sh@74 -- # sort 00:10:01.807 00:52:35 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:45083f67-cc4d-419c-80ff-17d8630ff1ca bdev_register:8c034bb7-b4c8-4dee-95ac-5dcbd79d382c bdev_register:a139095e-f4e1-47b2-9438-fc33ab82f857 bdev_register:6f82437d-f838-4443-87d2-6cefd7f034be 00:10:01.807 00:52:36 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:10:01.807 00:52:36 -- json_config/json_config.sh@75 -- # get_notifications 00:10:01.807 00:52:36 -- json_config/json_config.sh@75 -- # sort 00:10:01.807 00:52:36 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:01.807 00:52:36 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:10:01.807 00:52:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:01.807 00:52:36 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:01.807 00:52:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:01.807 00:52:36 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p1 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:01.807 00:52:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p0 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:01.807 00:52:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:01.807 00:52:36 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:01.807 00:52:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:01.807 00:52:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:01.807 00:52:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:01.807 00:52:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:01.807 00:52:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:01.807 00:52:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:01.807 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:02.066 00:52:36 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:10:02.066 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:02.066 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:02.066 00:52:36 -- json_config/json_config.sh@65 -- # echo bdev_register:45083f67-cc4d-419c-80ff-17d8630ff1ca 00:10:02.066 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:02.066 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:02.066 00:52:36 -- json_config/json_config.sh@65 -- # echo bdev_register:8c034bb7-b4c8-4dee-95ac-5dcbd79d382c 00:10:02.066 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:02.066 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:02.066 00:52:36 -- json_config/json_config.sh@65 -- # echo bdev_register:a139095e-f4e1-47b2-9438-fc33ab82f857 00:10:02.066 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:02.066 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:02.066 00:52:36 -- json_config/json_config.sh@65 -- # echo 
bdev_register:6f82437d-f838-4443-87d2-6cefd7f034be 00:10:02.066 00:52:36 -- json_config/json_config.sh@64 -- # IFS=: 00:10:02.066 00:52:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:02.066 00:52:36 -- json_config/json_config.sh@77 -- # [[ bdev_register:45083f67-cc4d-419c-80ff-17d8630ff1ca bdev_register:6f82437d-f838-4443-87d2-6cefd7f034be bdev_register:8c034bb7-b4c8-4dee-95ac-5dcbd79d382c bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a139095e-f4e1-47b2-9438-fc33ab82f857 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\5\0\8\3\f\6\7\-\c\c\4\d\-\4\1\9\c\-\8\0\f\f\-\1\7\d\8\6\3\0\f\f\1\c\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\f\8\2\4\3\7\d\-\f\8\3\8\-\4\4\4\3\-\8\7\d\2\-\6\c\e\f\d\7\f\0\3\4\b\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\c\0\3\4\b\b\7\-\b\4\c\8\-\4\d\e\e\-\9\5\a\c\-\5\d\c\b\d\7\9\d\3\8\2\c\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\1\3\9\0\9\5\e\-\f\4\e\1\-\4\7\b\2\-\9\4\3\8\-\f\c\3\3\a\b\8\2\f\8\5\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:10:02.066 00:52:36 -- json_config/json_config.sh@89 -- # cat 00:10:02.067 00:52:36 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:45083f67-cc4d-419c-80ff-17d8630ff1ca bdev_register:6f82437d-f838-4443-87d2-6cefd7f034be bdev_register:8c034bb7-b4c8-4dee-95ac-5dcbd79d382c bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a139095e-f4e1-47b2-9438-fc33ab82f857 bdev_register:aio_disk 00:10:02.067 Expected events matched: 00:10:02.067 bdev_register:45083f67-cc4d-419c-80ff-17d8630ff1ca 00:10:02.067 bdev_register:6f82437d-f838-4443-87d2-6cefd7f034be 00:10:02.067 bdev_register:8c034bb7-b4c8-4dee-95ac-5dcbd79d382c 00:10:02.067 bdev_register:Malloc0 00:10:02.067 bdev_register:Malloc0p0 00:10:02.067 bdev_register:Malloc0p1 00:10:02.067 bdev_register:Malloc0p2 00:10:02.067 bdev_register:Malloc1 00:10:02.067 bdev_register:Malloc3 00:10:02.067 bdev_register:Null0 00:10:02.067 bdev_register:Nvme0n1 00:10:02.067 bdev_register:Nvme0n1p0 00:10:02.067 bdev_register:Nvme0n1p1 00:10:02.067 bdev_register:PTBdevFromMalloc3 00:10:02.067 bdev_register:a139095e-f4e1-47b2-9438-fc33ab82f857 00:10:02.067 bdev_register:aio_disk 00:10:02.067 00:52:36 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:10:02.067 00:52:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:02.067 00:52:36 -- common/autotest_common.sh@10 -- # set +x 00:10:02.067 00:52:36 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:10:02.067 00:52:36 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:10:02.067 00:52:36 -- json_config/json_config.sh@343 -- # [[ 
0 -eq 1 ]] 00:10:02.067 00:52:36 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:10:02.067 00:52:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:02.067 00:52:36 -- common/autotest_common.sh@10 -- # set +x 00:10:02.067 00:52:36 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:10:02.067 00:52:36 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:02.067 00:52:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:02.340 MallocBdevForConfigChangeCheck 00:10:02.340 00:52:36 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:10:02.340 00:52:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:02.340 00:52:36 -- common/autotest_common.sh@10 -- # set +x 00:10:02.340 00:52:36 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:10:02.340 00:52:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:02.629 INFO: shutting down applications... 00:10:02.629 00:52:36 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:10:02.629 00:52:36 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:10:02.629 00:52:36 -- json_config/json_config.sh@431 -- # json_config_clear target 00:10:02.629 00:52:36 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:10:02.629 00:52:36 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:02.888 [2024-11-18 00:52:37.069472] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:10:02.888 Calling clear_vhost_scsi_subsystem 00:10:02.888 Calling clear_iscsi_subsystem 00:10:02.888 Calling clear_vhost_blk_subsystem 00:10:02.888 Calling clear_nbd_subsystem 00:10:02.888 Calling clear_nvmf_subsystem 00:10:02.888 Calling clear_bdev_subsystem 00:10:02.888 Calling clear_accel_subsystem 00:10:02.888 Calling clear_iobuf_subsystem 00:10:02.888 Calling clear_sock_subsystem 00:10:02.888 Calling clear_vmd_subsystem 00:10:02.888 Calling clear_scheduler_subsystem 00:10:02.888 00:52:37 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:10:02.888 00:52:37 -- json_config/json_config.sh@396 -- # count=100 00:10:02.888 00:52:37 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:10:02.888 00:52:37 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:10:02.888 00:52:37 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:02.888 00:52:37 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:03.456 00:52:37 -- json_config/json_config.sh@398 -- # break 00:10:03.456 00:52:37 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:10:03.456 00:52:37 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:10:03.456 00:52:37 -- json_config/json_config.sh@120 -- # local app=target 00:10:03.456 00:52:37 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:10:03.456 00:52:37 -- json_config/json_config.sh@124 -- # [[ -n 115305 ]] 00:10:03.456 
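The create_bdev_subsystem_config phase traced above drives everything through the tgt_rpc wrapper, which is simply scripts/rpc.py pointed at the target's /var/tmp/spdk_tgt.sock socket. A minimal by-hand sketch of the same bdev topology, with the names, sizes and flags copied from the trace; the ordering is simplified here (Malloc0 is created before it is split, whereas the test issues the split first and lets registration defer):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  # split the NVMe namespace in two; Nvme0n1p0 later hosts the lvolstore
  $RPC bdev_split_create Nvme0n1 2
  # malloc-backed bdevs, sizes and block sizes as in the trace
  $RPC bdev_malloc_create 8 4096 --name Malloc3
  $RPC bdev_malloc_create 32 512 --name Malloc0
  $RPC bdev_malloc_create 16 4096 --name Malloc1
  $RPC bdev_split_create Malloc0 3
  # passthru vbdev stacked on Malloc3, plus null and AIO bdevs
  $RPC bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
  $RPC bdev_null_create Null0 32 512
  dd if=/dev/zero of=/sample_aio bs=1024 count=102400
  $RPC bdev_aio_create /sample_aio aio_disk 1024
  # lvolstore on the first split partition, then lvol / snapshot / clone
  $RPC bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
  $RPC bdev_lvol_create -l lvs_test lvol0 32
  $RPC bdev_lvol_create -l lvs_test -t lvol1 32
  $RPC bdev_lvol_snapshot lvs_test/lvol0 snapshot0
  $RPC bdev_lvol_clone lvs_test/snapshot0 clone0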
00:52:37 -- json_config/json_config.sh@127 -- # kill -SIGINT 115305 00:10:03.456 00:52:37 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:10:03.456 00:52:37 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:10:03.456 00:52:37 -- json_config/json_config.sh@130 -- # kill -0 115305 00:10:03.456 00:52:37 -- json_config/json_config.sh@134 -- # sleep 0.5 00:10:03.715 00:52:38 -- json_config/json_config.sh@129 -- # (( i++ )) 00:10:03.715 00:52:38 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:10:03.715 00:52:38 -- json_config/json_config.sh@130 -- # kill -0 115305 00:10:03.715 00:52:38 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:10:03.715 00:52:38 -- json_config/json_config.sh@132 -- # break 00:10:03.715 SPDK target shutdown done 00:10:03.715 00:52:38 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:10:03.715 00:52:38 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:10:03.715 INFO: relaunching applications... 00:10:03.715 00:52:38 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:10:03.715 00:52:38 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:03.715 00:52:38 -- json_config/json_config.sh@98 -- # local app=target 00:10:03.715 00:52:38 -- json_config/json_config.sh@99 -- # shift 00:10:03.715 00:52:38 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:10:03.715 00:52:38 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:10:03.715 00:52:38 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:10:03.715 00:52:38 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:10:03.715 00:52:38 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:10:03.715 00:52:38 -- json_config/json_config.sh@111 -- # app_pid[$app]=115549 00:10:03.715 Waiting for target to run... 00:10:03.715 00:52:38 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:10:03.715 00:52:38 -- json_config/json_config.sh@114 -- # waitforlisten 115549 /var/tmp/spdk_tgt.sock 00:10:03.715 00:52:38 -- common/autotest_common.sh@829 -- # '[' -z 115549 ']' 00:10:03.715 00:52:38 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:03.715 00:52:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:03.715 00:52:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:03.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:03.715 00:52:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:03.715 00:52:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:03.715 00:52:38 -- common/autotest_common.sh@10 -- # set +x 00:10:03.974 [2024-11-18 00:52:38.172332] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:03.974 [2024-11-18 00:52:38.172556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115549 ] 00:10:04.542 [2024-11-18 00:52:38.724429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.542 [2024-11-18 00:52:38.771257] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:04.542 [2024-11-18 00:52:38.771476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.542 [2024-11-18 00:52:38.922865] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:04.542 [2024-11-18 00:52:38.923007] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:04.542 [2024-11-18 00:52:38.930803] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:04.542 [2024-11-18 00:52:38.930878] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:04.542 [2024-11-18 00:52:38.938850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:04.542 [2024-11-18 00:52:38.938922] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:04.542 [2024-11-18 00:52:38.938975] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:04.800 [2024-11-18 00:52:39.025317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:04.800 [2024-11-18 00:52:39.025405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.800 [2024-11-18 00:52:39.025435] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:04.800 [2024-11-18 00:52:39.025473] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.800 [2024-11-18 00:52:39.025987] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.800 [2024-11-18 00:52:39.026029] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:05.366 00:52:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:05.366 00:52:39 -- common/autotest_common.sh@862 -- # return 0 00:10:05.366 00:10:05.366 INFO: Checking if target configuration is the same... 00:10:05.366 00:52:39 -- json_config/json_config.sh@115 -- # echo '' 00:10:05.366 00:52:39 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:10:05.366 00:52:39 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:05.366 00:52:39 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:05.366 00:52:39 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:10:05.366 00:52:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:05.366 + '[' 2 -ne 2 ']' 00:10:05.366 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:05.366 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:10:05.366 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:05.366 +++ basename /dev/fd/62 00:10:05.366 ++ mktemp /tmp/62.XXX 00:10:05.366 + tmp_file_1=/tmp/62.IXX 00:10:05.366 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:05.366 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:05.366 + tmp_file_2=/tmp/spdk_tgt_config.json.CbH 00:10:05.366 + ret=0 00:10:05.366 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:05.934 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:05.934 + diff -u /tmp/62.IXX /tmp/spdk_tgt_config.json.CbH 00:10:05.934 INFO: JSON config files are the same 00:10:05.934 + echo 'INFO: JSON config files are the same' 00:10:05.934 + rm /tmp/62.IXX /tmp/spdk_tgt_config.json.CbH 00:10:05.934 + exit 0 00:10:05.934 00:52:40 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:10:05.934 INFO: changing configuration and checking if this can be detected... 00:10:05.934 00:52:40 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:05.934 00:52:40 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:05.934 00:52:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:06.193 00:52:40 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:06.193 00:52:40 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:10:06.193 00:52:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:06.193 + '[' 2 -ne 2 ']' 00:10:06.193 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:06.193 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:10:06.193 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:06.193 +++ basename /dev/fd/62 00:10:06.193 ++ mktemp /tmp/62.XXX 00:10:06.193 + tmp_file_1=/tmp/62.8ad 00:10:06.193 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:06.193 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:06.193 + tmp_file_2=/tmp/spdk_tgt_config.json.Goo 00:10:06.193 + ret=0 00:10:06.193 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:06.452 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:06.452 + diff -u /tmp/62.8ad /tmp/spdk_tgt_config.json.Goo 00:10:06.452 + ret=1 00:10:06.452 + echo '=== Start of file: /tmp/62.8ad ===' 00:10:06.452 + cat /tmp/62.8ad 00:10:06.452 + echo '=== End of file: /tmp/62.8ad ===' 00:10:06.452 + echo '' 00:10:06.452 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Goo ===' 00:10:06.452 + cat /tmp/spdk_tgt_config.json.Goo 00:10:06.452 + echo '=== End of file: /tmp/spdk_tgt_config.json.Goo ===' 00:10:06.452 + echo '' 00:10:06.452 + rm /tmp/62.8ad /tmp/spdk_tgt_config.json.Goo 00:10:06.452 + exit 1 00:10:06.452 INFO: configuration change detected. 00:10:06.452 00:52:40 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
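Both json_diff.sh passes above reduce to the same recipe: normalize the live configuration and the saved one with config_filter.py, then compare with diff -u. The first pass (nothing changed) matches; the second, run after bdev_malloc_delete MallocBdevForConfigChangeCheck, reports a difference. A rough by-hand equivalent, using fixed temp-file names instead of the mktemp names from the trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  # live configuration of the running target, normalized
  $RPC save_config | $FILTER -method sort > /tmp/live.json
  # configuration the target was started from, normalized the same way
  $FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json \
    && echo 'INFO: JSON config files are the same' \
    || echo 'INFO: configuration change detected.'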
00:10:06.452 00:52:40 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:10:06.452 00:52:40 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:10:06.452 00:52:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:06.452 00:52:40 -- common/autotest_common.sh@10 -- # set +x 00:10:06.452 00:52:40 -- json_config/json_config.sh@360 -- # local ret=0 00:10:06.452 00:52:40 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:10:06.452 00:52:40 -- json_config/json_config.sh@370 -- # [[ -n 115549 ]] 00:10:06.452 00:52:40 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:10:06.452 00:52:40 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:10:06.452 00:52:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:06.452 00:52:40 -- common/autotest_common.sh@10 -- # set +x 00:10:06.452 00:52:40 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:10:06.452 00:52:40 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:10:06.452 00:52:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:10:06.711 00:52:41 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:10:06.711 00:52:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:10:06.970 00:52:41 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:10:06.970 00:52:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:10:07.229 00:52:41 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:10:07.229 00:52:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:10:07.488 00:52:41 -- json_config/json_config.sh@246 -- # uname -s 00:10:07.488 00:52:41 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:10:07.488 00:52:41 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:10:07.488 00:52:41 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:10:07.488 00:52:41 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:10:07.488 00:52:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:07.488 00:52:41 -- common/autotest_common.sh@10 -- # set +x 00:10:07.488 00:52:41 -- json_config/json_config.sh@376 -- # killprocess 115549 00:10:07.488 00:52:41 -- common/autotest_common.sh@936 -- # '[' -z 115549 ']' 00:10:07.488 00:52:41 -- common/autotest_common.sh@940 -- # kill -0 115549 00:10:07.488 00:52:41 -- common/autotest_common.sh@941 -- # uname 00:10:07.488 00:52:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:07.488 00:52:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115549 00:10:07.488 00:52:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:07.488 00:52:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:07.488 killing process with pid 115549 00:10:07.488 00:52:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115549' 00:10:07.489 00:52:41 -- common/autotest_common.sh@955 -- # kill 115549 00:10:07.489 00:52:41 -- common/autotest_common.sh@960 -- # wait 115549 00:10:08.057 00:52:42 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:08.057 00:52:42 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:10:08.057 00:52:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:08.057 00:52:42 -- common/autotest_common.sh@10 -- # set +x 00:10:08.057 00:52:42 -- json_config/json_config.sh@381 -- # return 0 00:10:08.057 INFO: Success 00:10:08.057 00:52:42 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:10:08.057 00:10:08.057 real 0m11.734s 00:10:08.057 user 0m16.678s 00:10:08.057 sys 0m3.197s 00:10:08.057 00:52:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:08.057 00:52:42 -- common/autotest_common.sh@10 -- # set +x 00:10:08.057 ************************************ 00:10:08.057 END TEST json_config 00:10:08.057 ************************************ 00:10:08.057 00:52:42 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:08.057 00:52:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:08.058 00:52:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:08.058 00:52:42 -- common/autotest_common.sh@10 -- # set +x 00:10:08.058 ************************************ 00:10:08.058 START TEST json_config_extra_key 00:10:08.058 ************************************ 00:10:08.058 00:52:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:08.058 00:52:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:08.058 00:52:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:08.058 00:52:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:08.317 00:52:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:08.317 00:52:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:08.317 00:52:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:08.317 00:52:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:08.317 00:52:42 -- scripts/common.sh@335 -- # IFS=.-: 00:10:08.317 00:52:42 -- scripts/common.sh@335 -- # read -ra ver1 00:10:08.317 00:52:42 -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.317 00:52:42 -- scripts/common.sh@336 -- # read -ra ver2 00:10:08.317 00:52:42 -- scripts/common.sh@337 -- # local 'op=<' 00:10:08.317 00:52:42 -- scripts/common.sh@339 -- # ver1_l=2 00:10:08.317 00:52:42 -- scripts/common.sh@340 -- # ver2_l=1 00:10:08.317 00:52:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:08.317 00:52:42 -- scripts/common.sh@343 -- # case "$op" in 00:10:08.317 00:52:42 -- scripts/common.sh@344 -- # : 1 00:10:08.317 00:52:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:08.317 00:52:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.317 00:52:42 -- scripts/common.sh@364 -- # decimal 1 00:10:08.317 00:52:42 -- scripts/common.sh@352 -- # local d=1 00:10:08.317 00:52:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.317 00:52:42 -- scripts/common.sh@354 -- # echo 1 00:10:08.317 00:52:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:08.317 00:52:42 -- scripts/common.sh@365 -- # decimal 2 00:10:08.317 00:52:42 -- scripts/common.sh@352 -- # local d=2 00:10:08.317 00:52:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.317 00:52:42 -- scripts/common.sh@354 -- # echo 2 00:10:08.317 00:52:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:08.317 00:52:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:08.317 00:52:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:08.317 00:52:42 -- scripts/common.sh@367 -- # return 0 00:10:08.317 00:52:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.317 00:52:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:08.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.317 --rc genhtml_branch_coverage=1 00:10:08.317 --rc genhtml_function_coverage=1 00:10:08.317 --rc genhtml_legend=1 00:10:08.317 --rc geninfo_all_blocks=1 00:10:08.317 --rc geninfo_unexecuted_blocks=1 00:10:08.317 00:10:08.317 ' 00:10:08.317 00:52:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:08.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.317 --rc genhtml_branch_coverage=1 00:10:08.317 --rc genhtml_function_coverage=1 00:10:08.317 --rc genhtml_legend=1 00:10:08.317 --rc geninfo_all_blocks=1 00:10:08.317 --rc geninfo_unexecuted_blocks=1 00:10:08.317 00:10:08.317 ' 00:10:08.317 00:52:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:08.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.317 --rc genhtml_branch_coverage=1 00:10:08.317 --rc genhtml_function_coverage=1 00:10:08.317 --rc genhtml_legend=1 00:10:08.317 --rc geninfo_all_blocks=1 00:10:08.317 --rc geninfo_unexecuted_blocks=1 00:10:08.317 00:10:08.318 ' 00:10:08.318 00:52:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:08.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.318 --rc genhtml_branch_coverage=1 00:10:08.318 --rc genhtml_function_coverage=1 00:10:08.318 --rc genhtml_legend=1 00:10:08.318 --rc geninfo_all_blocks=1 00:10:08.318 --rc geninfo_unexecuted_blocks=1 00:10:08.318 00:10:08.318 ' 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:08.318 00:52:42 -- nvmf/common.sh@7 -- # uname -s 00:10:08.318 00:52:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.318 00:52:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.318 00:52:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.318 00:52:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.318 00:52:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.318 00:52:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.318 00:52:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.318 00:52:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.318 00:52:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.318 00:52:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.318 00:52:42 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6eb0b09e-8d6a-41d6-ace9-1878a4a8d81a 00:10:08.318 00:52:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=6eb0b09e-8d6a-41d6-ace9-1878a4a8d81a 00:10:08.318 00:52:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.318 00:52:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.318 00:52:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:08.318 00:52:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:08.318 00:52:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.318 00:52:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.318 00:52:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.318 00:52:42 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:08.318 00:52:42 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:08.318 00:52:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:08.318 00:52:42 -- paths/export.sh@5 -- # export PATH 00:10:08.318 00:52:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:08.318 00:52:42 -- nvmf/common.sh@46 -- # : 0 00:10:08.318 00:52:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:08.318 00:52:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:08.318 00:52:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:08.318 00:52:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.318 00:52:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.318 00:52:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:08.318 00:52:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:08.318 00:52:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:08.318 00:52:42 -- 
json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:08.318 INFO: launching applications... 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@25 -- # shift 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=115727 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:10:08.318 Waiting for target to run... 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 115727 /var/tmp/spdk_tgt.sock 00:10:08.318 00:52:42 -- common/autotest_common.sh@829 -- # '[' -z 115727 ']' 00:10:08.318 00:52:42 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:08.318 00:52:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:08.318 00:52:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:08.318 00:52:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:08.318 00:52:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.318 00:52:42 -- common/autotest_common.sh@10 -- # set +x 00:10:08.318 [2024-11-18 00:52:42.595030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:08.318 [2024-11-18 00:52:42.595324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115727 ] 00:10:08.887 [2024-11-18 00:52:43.188337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.887 [2024-11-18 00:52:43.231225] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:08.887 [2024-11-18 00:52:43.231430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.146 00:52:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.146 00:52:43 -- common/autotest_common.sh@862 -- # return 0 00:10:09.146 00:10:09.146 INFO: shutting down applications... 
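The extra_key variant that has just launched follows the same lifecycle as the main json_config run: start spdk_tgt against a prebuilt JSON file, wait for its RPC socket, then stop it with SIGINT and poll until the pid is gone. A condensed sketch of that pattern with the command-line flags copied from the trace; the readiness probe via spdk_get_version is only a stand-in for the waitforlisten helper, and its retry budget loosely mirrors the max_retries=100 seen above:

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json $SPDK/test/json_config/extra_key.json &
  pid=$!
  # crude readiness wait: retry an RPC until the target answers
  for i in $(seq 1 100); do
      $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version > /dev/null 2>&1 && break
      sleep 0.5
  done
  # graceful shutdown, mirroring the 30 x 0.5 s kill -0 loop in the trace
  kill -SIGINT $pid
  for i in $(seq 1 30); do
      kill -0 $pid 2> /dev/null || break
      sleep 0.5
  done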
00:10:09.146 00:52:43 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:10:09.146 00:52:43 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:10:09.146 00:52:43 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:10:09.146 00:52:43 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:10:09.146 00:52:43 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:10:09.146 00:52:43 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 115727 ]] 00:10:09.146 00:52:43 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 115727 00:10:09.146 00:52:43 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:10:09.146 00:52:43 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:10:09.146 00:52:43 -- json_config/json_config_extra_key.sh@50 -- # kill -0 115727 00:10:09.146 00:52:43 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:10:09.714 00:52:44 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:10:09.715 00:52:44 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:10:09.715 00:52:44 -- json_config/json_config_extra_key.sh@50 -- # kill -0 115727 00:10:09.715 00:52:44 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:10:10.282 00:52:44 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:10:10.282 00:52:44 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:10:10.282 00:52:44 -- json_config/json_config_extra_key.sh@50 -- # kill -0 115727 00:10:10.282 00:52:44 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:10:10.282 00:52:44 -- json_config/json_config_extra_key.sh@52 -- # break 00:10:10.282 00:52:44 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:10:10.282 SPDK target shutdown done 00:10:10.282 00:52:44 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:10:10.282 Success 00:10:10.282 00:52:44 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:10:10.282 00:10:10.282 real 0m2.212s 00:10:10.282 user 0m1.587s 00:10:10.282 sys 0m0.738s 00:10:10.282 00:52:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:10.282 00:52:44 -- common/autotest_common.sh@10 -- # set +x 00:10:10.282 ************************************ 00:10:10.282 END TEST json_config_extra_key 00:10:10.282 ************************************ 00:10:10.282 00:52:44 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:10.282 00:52:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:10.282 00:52:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:10.282 00:52:44 -- common/autotest_common.sh@10 -- # set +x 00:10:10.282 ************************************ 00:10:10.282 START TEST alias_rpc 00:10:10.282 ************************************ 00:10:10.282 00:52:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:10.542 * Looking for test storage... 
00:10:10.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:10.542 00:52:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:10.542 00:52:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:10.542 00:52:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:10.542 00:52:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:10.542 00:52:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:10.542 00:52:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:10.542 00:52:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:10.542 00:52:44 -- scripts/common.sh@335 -- # IFS=.-: 00:10:10.542 00:52:44 -- scripts/common.sh@335 -- # read -ra ver1 00:10:10.542 00:52:44 -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.542 00:52:44 -- scripts/common.sh@336 -- # read -ra ver2 00:10:10.542 00:52:44 -- scripts/common.sh@337 -- # local 'op=<' 00:10:10.542 00:52:44 -- scripts/common.sh@339 -- # ver1_l=2 00:10:10.542 00:52:44 -- scripts/common.sh@340 -- # ver2_l=1 00:10:10.542 00:52:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:10.542 00:52:44 -- scripts/common.sh@343 -- # case "$op" in 00:10:10.542 00:52:44 -- scripts/common.sh@344 -- # : 1 00:10:10.542 00:52:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:10.542 00:52:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.542 00:52:44 -- scripts/common.sh@364 -- # decimal 1 00:10:10.542 00:52:44 -- scripts/common.sh@352 -- # local d=1 00:10:10.542 00:52:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.542 00:52:44 -- scripts/common.sh@354 -- # echo 1 00:10:10.542 00:52:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:10.542 00:52:44 -- scripts/common.sh@365 -- # decimal 2 00:10:10.542 00:52:44 -- scripts/common.sh@352 -- # local d=2 00:10:10.542 00:52:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.542 00:52:44 -- scripts/common.sh@354 -- # echo 2 00:10:10.542 00:52:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:10.542 00:52:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:10.542 00:52:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:10.542 00:52:44 -- scripts/common.sh@367 -- # return 0 00:10:10.542 00:52:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.542 00:52:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:10.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.542 --rc genhtml_branch_coverage=1 00:10:10.542 --rc genhtml_function_coverage=1 00:10:10.542 --rc genhtml_legend=1 00:10:10.542 --rc geninfo_all_blocks=1 00:10:10.542 --rc geninfo_unexecuted_blocks=1 00:10:10.542 00:10:10.542 ' 00:10:10.542 00:52:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:10.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.542 --rc genhtml_branch_coverage=1 00:10:10.542 --rc genhtml_function_coverage=1 00:10:10.542 --rc genhtml_legend=1 00:10:10.542 --rc geninfo_all_blocks=1 00:10:10.542 --rc geninfo_unexecuted_blocks=1 00:10:10.542 00:10:10.542 ' 00:10:10.542 00:52:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:10.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.542 --rc genhtml_branch_coverage=1 00:10:10.542 --rc genhtml_function_coverage=1 00:10:10.542 --rc genhtml_legend=1 00:10:10.542 --rc geninfo_all_blocks=1 00:10:10.542 --rc geninfo_unexecuted_blocks=1 00:10:10.542 00:10:10.542 ' 
00:10:10.542 00:52:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:10.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.542 --rc genhtml_branch_coverage=1 00:10:10.542 --rc genhtml_function_coverage=1 00:10:10.542 --rc genhtml_legend=1 00:10:10.542 --rc geninfo_all_blocks=1 00:10:10.542 --rc geninfo_unexecuted_blocks=1 00:10:10.542 00:10:10.542 ' 00:10:10.542 00:52:44 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:10.542 00:52:44 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=115818 00:10:10.542 00:52:44 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:10.542 00:52:44 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 115818 00:10:10.542 00:52:44 -- common/autotest_common.sh@829 -- # '[' -z 115818 ']' 00:10:10.542 00:52:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.542 00:52:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.542 00:52:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.542 00:52:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.542 00:52:44 -- common/autotest_common.sh@10 -- # set +x 00:10:10.542 [2024-11-18 00:52:44.879603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:10.542 [2024-11-18 00:52:44.879888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115818 ] 00:10:10.801 [2024-11-18 00:52:45.035490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.801 [2024-11-18 00:52:45.105464] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:10.801 [2024-11-18 00:52:45.105696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.369 00:52:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.369 00:52:45 -- common/autotest_common.sh@862 -- # return 0 00:10:11.369 00:52:45 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:11.628 00:52:45 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 115818 00:10:11.628 00:52:46 -- common/autotest_common.sh@936 -- # '[' -z 115818 ']' 00:10:11.628 00:52:46 -- common/autotest_common.sh@940 -- # kill -0 115818 00:10:11.628 00:52:46 -- common/autotest_common.sh@941 -- # uname 00:10:11.628 00:52:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:11.628 00:52:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115818 00:10:11.628 00:52:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:11.628 00:52:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:11.628 00:52:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115818' 00:10:11.958 killing process with pid 115818 00:10:11.958 00:52:46 -- common/autotest_common.sh@955 -- # kill 115818 00:10:11.958 00:52:46 -- common/autotest_common.sh@960 -- # wait 115818 00:10:12.539 ************************************ 00:10:12.539 END TEST alias_rpc 00:10:12.539 ************************************ 00:10:12.539 00:10:12.539 real 0m2.097s 00:10:12.539 user 0m2.021s 00:10:12.539 sys 0m0.676s 
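The alias_rpc suite that just completed needs little more than a bare spdk_tgt plus one load_config call fed over stdin (the trace shows scripts/rpc.py load_config -i), followed by the usual kill-and-wait teardown. Roughly, with a placeholder config path standing in for whatever JSON the test actually pipes in:

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/bin/spdk_tgt &
  pid=$!
  sleep 1   # crude stand-in for the waitforlisten polling in the trace
  # push a JSON configuration into the running target over stdin
  # (placeholder path, not the payload the test actually uses)
  $SPDK/scripts/rpc.py load_config -i < /tmp/alias_test_config.json
  kill $pid
  wait $pid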
00:10:12.539 00:52:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:12.539 00:52:46 -- common/autotest_common.sh@10 -- # set +x 00:10:12.539 00:52:46 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:10:12.539 00:52:46 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:12.539 00:52:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:12.539 00:52:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:12.539 00:52:46 -- common/autotest_common.sh@10 -- # set +x 00:10:12.539 ************************************ 00:10:12.539 START TEST spdkcli_tcp 00:10:12.539 ************************************ 00:10:12.539 00:52:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:12.539 * Looking for test storage... 00:10:12.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:12.539 00:52:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:12.539 00:52:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:12.539 00:52:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:12.799 00:52:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:12.799 00:52:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:12.799 00:52:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:12.799 00:52:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:12.799 00:52:46 -- scripts/common.sh@335 -- # IFS=.-: 00:10:12.799 00:52:46 -- scripts/common.sh@335 -- # read -ra ver1 00:10:12.799 00:52:46 -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.799 00:52:46 -- scripts/common.sh@336 -- # read -ra ver2 00:10:12.799 00:52:46 -- scripts/common.sh@337 -- # local 'op=<' 00:10:12.799 00:52:46 -- scripts/common.sh@339 -- # ver1_l=2 00:10:12.799 00:52:46 -- scripts/common.sh@340 -- # ver2_l=1 00:10:12.799 00:52:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:12.799 00:52:46 -- scripts/common.sh@343 -- # case "$op" in 00:10:12.799 00:52:46 -- scripts/common.sh@344 -- # : 1 00:10:12.799 00:52:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:12.799 00:52:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.799 00:52:46 -- scripts/common.sh@364 -- # decimal 1 00:10:12.799 00:52:46 -- scripts/common.sh@352 -- # local d=1 00:10:12.799 00:52:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.799 00:52:46 -- scripts/common.sh@354 -- # echo 1 00:10:12.799 00:52:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:12.799 00:52:46 -- scripts/common.sh@365 -- # decimal 2 00:10:12.799 00:52:46 -- scripts/common.sh@352 -- # local d=2 00:10:12.799 00:52:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.799 00:52:46 -- scripts/common.sh@354 -- # echo 2 00:10:12.799 00:52:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:12.799 00:52:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:12.799 00:52:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:12.799 00:52:46 -- scripts/common.sh@367 -- # return 0 00:10:12.799 00:52:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.799 00:52:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:12.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.799 --rc genhtml_branch_coverage=1 00:10:12.799 --rc genhtml_function_coverage=1 00:10:12.799 --rc genhtml_legend=1 00:10:12.799 --rc geninfo_all_blocks=1 00:10:12.799 --rc geninfo_unexecuted_blocks=1 00:10:12.799 00:10:12.799 ' 00:10:12.799 00:52:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:12.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.799 --rc genhtml_branch_coverage=1 00:10:12.799 --rc genhtml_function_coverage=1 00:10:12.799 --rc genhtml_legend=1 00:10:12.799 --rc geninfo_all_blocks=1 00:10:12.799 --rc geninfo_unexecuted_blocks=1 00:10:12.799 00:10:12.799 ' 00:10:12.799 00:52:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:12.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.799 --rc genhtml_branch_coverage=1 00:10:12.799 --rc genhtml_function_coverage=1 00:10:12.799 --rc genhtml_legend=1 00:10:12.799 --rc geninfo_all_blocks=1 00:10:12.799 --rc geninfo_unexecuted_blocks=1 00:10:12.799 00:10:12.799 ' 00:10:12.799 00:52:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:12.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.799 --rc genhtml_branch_coverage=1 00:10:12.799 --rc genhtml_function_coverage=1 00:10:12.799 --rc genhtml_legend=1 00:10:12.799 --rc geninfo_all_blocks=1 00:10:12.799 --rc geninfo_unexecuted_blocks=1 00:10:12.799 00:10:12.799 ' 00:10:12.799 00:52:46 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:12.799 00:52:46 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:12.799 00:52:46 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:12.799 00:52:46 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:12.799 00:52:46 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:12.799 00:52:46 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:12.799 00:52:46 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:12.799 00:52:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:12.799 00:52:46 -- common/autotest_common.sh@10 -- # set +x 00:10:12.799 00:52:46 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=115911 00:10:12.799 00:52:46 -- spdkcli/tcp.sh@27 -- # waitforlisten 115911 00:10:12.799 00:52:46 -- spdkcli/tcp.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:12.799 00:52:46 -- common/autotest_common.sh@829 -- # '[' -z 115911 ']' 00:10:12.799 00:52:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.799 00:52:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.799 00:52:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.799 00:52:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.799 00:52:46 -- common/autotest_common.sh@10 -- # set +x 00:10:12.799 [2024-11-18 00:52:47.045305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:12.799 [2024-11-18 00:52:47.045485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115911 ] 00:10:12.799 [2024-11-18 00:52:47.191580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:13.058 [2024-11-18 00:52:47.267887] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:13.058 [2024-11-18 00:52:47.268513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.058 [2024-11-18 00:52:47.268515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.623 00:52:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.623 00:52:47 -- common/autotest_common.sh@862 -- # return 0 00:10:13.623 00:52:47 -- spdkcli/tcp.sh@31 -- # socat_pid=115933 00:10:13.623 00:52:47 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:13.623 00:52:47 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:13.882 [ 00:10:13.882 "spdk_get_version", 00:10:13.882 "rpc_get_methods", 00:10:13.882 "trace_get_info", 00:10:13.882 "trace_get_tpoint_group_mask", 00:10:13.882 "trace_disable_tpoint_group", 00:10:13.882 "trace_enable_tpoint_group", 00:10:13.882 "trace_clear_tpoint_mask", 00:10:13.882 "trace_set_tpoint_mask", 00:10:13.882 "framework_get_pci_devices", 00:10:13.882 "framework_get_config", 00:10:13.882 "framework_get_subsystems", 00:10:13.882 "iobuf_get_stats", 00:10:13.882 "iobuf_set_options", 00:10:13.882 "sock_set_default_impl", 00:10:13.882 "sock_impl_set_options", 00:10:13.882 "sock_impl_get_options", 00:10:13.882 "vmd_rescan", 00:10:13.882 "vmd_remove_device", 00:10:13.882 "vmd_enable", 00:10:13.882 "accel_get_stats", 00:10:13.882 "accel_set_options", 00:10:13.882 "accel_set_driver", 00:10:13.882 "accel_crypto_key_destroy", 00:10:13.882 "accel_crypto_keys_get", 00:10:13.882 "accel_crypto_key_create", 00:10:13.882 "accel_assign_opc", 00:10:13.882 "accel_get_module_info", 00:10:13.882 "accel_get_opc_assignments", 00:10:13.882 "notify_get_notifications", 00:10:13.882 "notify_get_types", 00:10:13.882 "bdev_get_histogram", 00:10:13.882 "bdev_enable_histogram", 00:10:13.882 "bdev_set_qos_limit", 00:10:13.882 "bdev_set_qd_sampling_period", 00:10:13.882 "bdev_get_bdevs", 00:10:13.882 "bdev_reset_iostat", 00:10:13.882 "bdev_get_iostat", 00:10:13.882 "bdev_examine", 00:10:13.882 "bdev_wait_for_examine", 00:10:13.882 "bdev_set_options", 00:10:13.882 "scsi_get_devices", 00:10:13.882 "thread_set_cpumask", 
00:10:13.882 "framework_get_scheduler", 00:10:13.882 "framework_set_scheduler", 00:10:13.882 "framework_get_reactors", 00:10:13.882 "thread_get_io_channels", 00:10:13.882 "thread_get_pollers", 00:10:13.882 "thread_get_stats", 00:10:13.882 "framework_monitor_context_switch", 00:10:13.882 "spdk_kill_instance", 00:10:13.882 "log_enable_timestamps", 00:10:13.882 "log_get_flags", 00:10:13.882 "log_clear_flag", 00:10:13.882 "log_set_flag", 00:10:13.882 "log_get_level", 00:10:13.882 "log_set_level", 00:10:13.882 "log_get_print_level", 00:10:13.882 "log_set_print_level", 00:10:13.882 "framework_enable_cpumask_locks", 00:10:13.882 "framework_disable_cpumask_locks", 00:10:13.882 "framework_wait_init", 00:10:13.882 "framework_start_init", 00:10:13.882 "virtio_blk_create_transport", 00:10:13.882 "virtio_blk_get_transports", 00:10:13.882 "vhost_controller_set_coalescing", 00:10:13.882 "vhost_get_controllers", 00:10:13.882 "vhost_delete_controller", 00:10:13.882 "vhost_create_blk_controller", 00:10:13.882 "vhost_scsi_controller_remove_target", 00:10:13.882 "vhost_scsi_controller_add_target", 00:10:13.882 "vhost_start_scsi_controller", 00:10:13.882 "vhost_create_scsi_controller", 00:10:13.882 "nbd_get_disks", 00:10:13.882 "nbd_stop_disk", 00:10:13.882 "nbd_start_disk", 00:10:13.882 "env_dpdk_get_mem_stats", 00:10:13.882 "nvmf_subsystem_get_listeners", 00:10:13.882 "nvmf_subsystem_get_qpairs", 00:10:13.882 "nvmf_subsystem_get_controllers", 00:10:13.882 "nvmf_get_stats", 00:10:13.882 "nvmf_get_transports", 00:10:13.882 "nvmf_create_transport", 00:10:13.882 "nvmf_get_targets", 00:10:13.882 "nvmf_delete_target", 00:10:13.882 "nvmf_create_target", 00:10:13.882 "nvmf_subsystem_allow_any_host", 00:10:13.882 "nvmf_subsystem_remove_host", 00:10:13.882 "nvmf_subsystem_add_host", 00:10:13.882 "nvmf_subsystem_remove_ns", 00:10:13.882 "nvmf_subsystem_add_ns", 00:10:13.882 "nvmf_subsystem_listener_set_ana_state", 00:10:13.882 "nvmf_discovery_get_referrals", 00:10:13.882 "nvmf_discovery_remove_referral", 00:10:13.882 "nvmf_discovery_add_referral", 00:10:13.882 "nvmf_subsystem_remove_listener", 00:10:13.882 "nvmf_subsystem_add_listener", 00:10:13.882 "nvmf_delete_subsystem", 00:10:13.882 "nvmf_create_subsystem", 00:10:13.882 "nvmf_get_subsystems", 00:10:13.882 "nvmf_set_crdt", 00:10:13.882 "nvmf_set_config", 00:10:13.882 "nvmf_set_max_subsystems", 00:10:13.882 "iscsi_set_options", 00:10:13.882 "iscsi_get_auth_groups", 00:10:13.882 "iscsi_auth_group_remove_secret", 00:10:13.882 "iscsi_auth_group_add_secret", 00:10:13.882 "iscsi_delete_auth_group", 00:10:13.882 "iscsi_create_auth_group", 00:10:13.882 "iscsi_set_discovery_auth", 00:10:13.882 "iscsi_get_options", 00:10:13.882 "iscsi_target_node_request_logout", 00:10:13.882 "iscsi_target_node_set_redirect", 00:10:13.882 "iscsi_target_node_set_auth", 00:10:13.882 "iscsi_target_node_add_lun", 00:10:13.882 "iscsi_get_connections", 00:10:13.882 "iscsi_portal_group_set_auth", 00:10:13.882 "iscsi_start_portal_group", 00:10:13.882 "iscsi_delete_portal_group", 00:10:13.882 "iscsi_create_portal_group", 00:10:13.882 "iscsi_get_portal_groups", 00:10:13.883 "iscsi_delete_target_node", 00:10:13.883 "iscsi_target_node_remove_pg_ig_maps", 00:10:13.883 "iscsi_target_node_add_pg_ig_maps", 00:10:13.883 "iscsi_create_target_node", 00:10:13.883 "iscsi_get_target_nodes", 00:10:13.883 "iscsi_delete_initiator_group", 00:10:13.883 "iscsi_initiator_group_remove_initiators", 00:10:13.883 "iscsi_initiator_group_add_initiators", 00:10:13.883 "iscsi_create_initiator_group", 00:10:13.883 
"iscsi_get_initiator_groups", 00:10:13.883 "iaa_scan_accel_module", 00:10:13.883 "dsa_scan_accel_module", 00:10:13.883 "ioat_scan_accel_module", 00:10:13.883 "accel_error_inject_error", 00:10:13.883 "bdev_iscsi_delete", 00:10:13.883 "bdev_iscsi_create", 00:10:13.883 "bdev_iscsi_set_options", 00:10:13.883 "bdev_virtio_attach_controller", 00:10:13.883 "bdev_virtio_scsi_get_devices", 00:10:13.883 "bdev_virtio_detach_controller", 00:10:13.883 "bdev_virtio_blk_set_hotplug", 00:10:13.883 "bdev_ftl_set_property", 00:10:13.883 "bdev_ftl_get_properties", 00:10:13.883 "bdev_ftl_get_stats", 00:10:13.883 "bdev_ftl_unmap", 00:10:13.883 "bdev_ftl_unload", 00:10:13.883 "bdev_ftl_delete", 00:10:13.883 "bdev_ftl_load", 00:10:13.883 "bdev_ftl_create", 00:10:13.883 "bdev_aio_delete", 00:10:13.883 "bdev_aio_rescan", 00:10:13.883 "bdev_aio_create", 00:10:13.883 "blobfs_create", 00:10:13.883 "blobfs_detect", 00:10:13.883 "blobfs_set_cache_size", 00:10:13.883 "bdev_zone_block_delete", 00:10:13.883 "bdev_zone_block_create", 00:10:13.883 "bdev_delay_delete", 00:10:13.883 "bdev_delay_create", 00:10:13.883 "bdev_delay_update_latency", 00:10:13.883 "bdev_split_delete", 00:10:13.883 "bdev_split_create", 00:10:13.883 "bdev_error_inject_error", 00:10:13.883 "bdev_error_delete", 00:10:13.883 "bdev_error_create", 00:10:13.883 "bdev_raid_set_options", 00:10:13.883 "bdev_raid_remove_base_bdev", 00:10:13.883 "bdev_raid_add_base_bdev", 00:10:13.883 "bdev_raid_delete", 00:10:13.883 "bdev_raid_create", 00:10:13.883 "bdev_raid_get_bdevs", 00:10:13.883 "bdev_lvol_grow_lvstore", 00:10:13.883 "bdev_lvol_get_lvols", 00:10:13.883 "bdev_lvol_get_lvstores", 00:10:13.883 "bdev_lvol_delete", 00:10:13.883 "bdev_lvol_set_read_only", 00:10:13.883 "bdev_lvol_resize", 00:10:13.883 "bdev_lvol_decouple_parent", 00:10:13.883 "bdev_lvol_inflate", 00:10:13.883 "bdev_lvol_rename", 00:10:13.883 "bdev_lvol_clone_bdev", 00:10:13.883 "bdev_lvol_clone", 00:10:13.883 "bdev_lvol_snapshot", 00:10:13.883 "bdev_lvol_create", 00:10:13.883 "bdev_lvol_delete_lvstore", 00:10:13.883 "bdev_lvol_rename_lvstore", 00:10:13.883 "bdev_lvol_create_lvstore", 00:10:13.883 "bdev_passthru_delete", 00:10:13.883 "bdev_passthru_create", 00:10:13.883 "bdev_nvme_cuse_unregister", 00:10:13.883 "bdev_nvme_cuse_register", 00:10:13.883 "bdev_opal_new_user", 00:10:13.883 "bdev_opal_set_lock_state", 00:10:13.883 "bdev_opal_delete", 00:10:13.883 "bdev_opal_get_info", 00:10:13.883 "bdev_opal_create", 00:10:13.883 "bdev_nvme_opal_revert", 00:10:13.883 "bdev_nvme_opal_init", 00:10:13.883 "bdev_nvme_send_cmd", 00:10:13.883 "bdev_nvme_get_path_iostat", 00:10:13.883 "bdev_nvme_get_mdns_discovery_info", 00:10:13.883 "bdev_nvme_stop_mdns_discovery", 00:10:13.883 "bdev_nvme_start_mdns_discovery", 00:10:13.883 "bdev_nvme_set_multipath_policy", 00:10:13.883 "bdev_nvme_set_preferred_path", 00:10:13.883 "bdev_nvme_get_io_paths", 00:10:13.883 "bdev_nvme_remove_error_injection", 00:10:13.883 "bdev_nvme_add_error_injection", 00:10:13.883 "bdev_nvme_get_discovery_info", 00:10:13.883 "bdev_nvme_stop_discovery", 00:10:13.883 "bdev_nvme_start_discovery", 00:10:13.883 "bdev_nvme_get_controller_health_info", 00:10:13.883 "bdev_nvme_disable_controller", 00:10:13.883 "bdev_nvme_enable_controller", 00:10:13.883 "bdev_nvme_reset_controller", 00:10:13.883 "bdev_nvme_get_transport_statistics", 00:10:13.883 "bdev_nvme_apply_firmware", 00:10:13.883 "bdev_nvme_detach_controller", 00:10:13.883 "bdev_nvme_get_controllers", 00:10:13.883 "bdev_nvme_attach_controller", 00:10:13.883 "bdev_nvme_set_hotplug", 00:10:13.883 
"bdev_nvme_set_options", 00:10:13.883 "bdev_null_resize", 00:10:13.883 "bdev_null_delete", 00:10:13.883 "bdev_null_create", 00:10:13.883 "bdev_malloc_delete", 00:10:13.883 "bdev_malloc_create" 00:10:13.883 ] 00:10:13.883 00:52:48 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:13.883 00:52:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.883 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:10:13.883 00:52:48 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:13.883 00:52:48 -- spdkcli/tcp.sh@38 -- # killprocess 115911 00:10:13.883 00:52:48 -- common/autotest_common.sh@936 -- # '[' -z 115911 ']' 00:10:13.883 00:52:48 -- common/autotest_common.sh@940 -- # kill -0 115911 00:10:13.883 00:52:48 -- common/autotest_common.sh@941 -- # uname 00:10:13.883 00:52:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:13.883 00:52:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115911 00:10:13.883 00:52:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:13.883 00:52:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:13.883 00:52:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115911' 00:10:13.883 killing process with pid 115911 00:10:13.883 00:52:48 -- common/autotest_common.sh@955 -- # kill 115911 00:10:13.883 00:52:48 -- common/autotest_common.sh@960 -- # wait 115911 00:10:14.450 00:10:14.450 real 0m2.080s 00:10:14.450 user 0m3.378s 00:10:14.450 sys 0m0.662s 00:10:14.450 00:52:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:14.450 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:10:14.450 ************************************ 00:10:14.450 END TEST spdkcli_tcp 00:10:14.450 ************************************ 00:10:14.710 00:52:48 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:14.710 00:52:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:14.710 00:52:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:14.710 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:10:14.710 ************************************ 00:10:14.710 START TEST dpdk_mem_utility 00:10:14.710 ************************************ 00:10:14.710 00:52:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:14.710 * Looking for test storage... 
00:10:14.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:14.710 00:52:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:14.710 00:52:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:14.710 00:52:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:14.710 00:52:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:14.710 00:52:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:14.710 00:52:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:14.710 00:52:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:14.710 00:52:49 -- scripts/common.sh@335 -- # IFS=.-: 00:10:14.710 00:52:49 -- scripts/common.sh@335 -- # read -ra ver1 00:10:14.710 00:52:49 -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.710 00:52:49 -- scripts/common.sh@336 -- # read -ra ver2 00:10:14.710 00:52:49 -- scripts/common.sh@337 -- # local 'op=<' 00:10:14.710 00:52:49 -- scripts/common.sh@339 -- # ver1_l=2 00:10:14.710 00:52:49 -- scripts/common.sh@340 -- # ver2_l=1 00:10:14.710 00:52:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:14.710 00:52:49 -- scripts/common.sh@343 -- # case "$op" in 00:10:14.710 00:52:49 -- scripts/common.sh@344 -- # : 1 00:10:14.710 00:52:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:14.710 00:52:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:14.710 00:52:49 -- scripts/common.sh@364 -- # decimal 1 00:10:14.710 00:52:49 -- scripts/common.sh@352 -- # local d=1 00:10:14.710 00:52:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.710 00:52:49 -- scripts/common.sh@354 -- # echo 1 00:10:14.710 00:52:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:14.710 00:52:49 -- scripts/common.sh@365 -- # decimal 2 00:10:14.710 00:52:49 -- scripts/common.sh@352 -- # local d=2 00:10:14.710 00:52:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.710 00:52:49 -- scripts/common.sh@354 -- # echo 2 00:10:14.710 00:52:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:14.710 00:52:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:14.710 00:52:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:14.710 00:52:49 -- scripts/common.sh@367 -- # return 0 00:10:14.710 00:52:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.710 00:52:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:14.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.710 --rc genhtml_branch_coverage=1 00:10:14.710 --rc genhtml_function_coverage=1 00:10:14.710 --rc genhtml_legend=1 00:10:14.710 --rc geninfo_all_blocks=1 00:10:14.710 --rc geninfo_unexecuted_blocks=1 00:10:14.710 00:10:14.710 ' 00:10:14.710 00:52:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:14.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.710 --rc genhtml_branch_coverage=1 00:10:14.710 --rc genhtml_function_coverage=1 00:10:14.710 --rc genhtml_legend=1 00:10:14.710 --rc geninfo_all_blocks=1 00:10:14.710 --rc geninfo_unexecuted_blocks=1 00:10:14.710 00:10:14.710 ' 00:10:14.710 00:52:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:14.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.710 --rc genhtml_branch_coverage=1 00:10:14.710 --rc genhtml_function_coverage=1 00:10:14.710 --rc genhtml_legend=1 00:10:14.710 --rc geninfo_all_blocks=1 00:10:14.710 --rc geninfo_unexecuted_blocks=1 00:10:14.710 00:10:14.710 ' 
00:10:14.710 00:52:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:14.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.710 --rc genhtml_branch_coverage=1 00:10:14.710 --rc genhtml_function_coverage=1 00:10:14.710 --rc genhtml_legend=1 00:10:14.710 --rc geninfo_all_blocks=1 00:10:14.710 --rc geninfo_unexecuted_blocks=1 00:10:14.710 00:10:14.710 ' 00:10:14.710 00:52:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:14.710 00:52:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=116028 00:10:14.710 00:52:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 116028 00:10:14.710 00:52:49 -- common/autotest_common.sh@829 -- # '[' -z 116028 ']' 00:10:14.710 00:52:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.710 00:52:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:14.710 00:52:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:14.710 00:52:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.710 00:52:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:14.977 00:52:49 -- common/autotest_common.sh@10 -- # set +x 00:10:14.977 [2024-11-18 00:52:49.190590] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:14.977 [2024-11-18 00:52:49.190864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116028 ] 00:10:14.977 [2024-11-18 00:52:49.347570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.239 [2024-11-18 00:52:49.427490] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:15.239 [2024-11-18 00:52:49.427750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.807 00:52:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:15.807 00:52:50 -- common/autotest_common.sh@862 -- # return 0 00:10:15.807 00:52:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:15.807 00:52:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:15.807 00:52:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.807 00:52:50 -- common/autotest_common.sh@10 -- # set +x 00:10:15.807 { 00:10:15.807 "filename": "/tmp/spdk_mem_dump.txt" 00:10:15.807 } 00:10:15.807 00:52:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.807 00:52:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:15.807 DPDK memory size 814.000000 MiB in 1 heap(s) 00:10:15.807 1 heaps totaling size 814.000000 MiB 00:10:15.807 size: 814.000000 MiB heap id: 0 00:10:15.807 end heaps---------- 00:10:15.807 8 mempools totaling size 598.116089 MiB 00:10:15.807 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:15.807 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:15.807 size: 84.521057 MiB name: bdev_io_116028 00:10:15.807 size: 51.011292 MiB name: evtpool_116028 00:10:15.808 size: 50.003479 MiB name: 
msgpool_116028 00:10:15.808 size: 21.763794 MiB name: PDU_Pool 00:10:15.808 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:15.808 size: 0.026123 MiB name: Session_Pool 00:10:15.808 end mempools------- 00:10:15.808 6 memzones totaling size 4.142822 MiB 00:10:15.808 size: 1.000366 MiB name: RG_ring_0_116028 00:10:15.808 size: 1.000366 MiB name: RG_ring_1_116028 00:10:15.808 size: 1.000366 MiB name: RG_ring_4_116028 00:10:15.808 size: 1.000366 MiB name: RG_ring_5_116028 00:10:15.808 size: 0.125366 MiB name: RG_ring_2_116028 00:10:15.808 size: 0.015991 MiB name: RG_ring_3_116028 00:10:15.808 end memzones------- 00:10:15.808 00:52:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:15.808 heap id: 0 total size: 814.000000 MiB number of busy elements: 220 number of free elements: 15 00:10:15.808 list of free elements. size: 12.486572 MiB 00:10:15.808 element at address: 0x200000400000 with size: 1.999512 MiB 00:10:15.808 element at address: 0x200018e00000 with size: 0.999878 MiB 00:10:15.808 element at address: 0x200019000000 with size: 0.999878 MiB 00:10:15.808 element at address: 0x200003e00000 with size: 0.996277 MiB 00:10:15.808 element at address: 0x200031c00000 with size: 0.994446 MiB 00:10:15.808 element at address: 0x200013800000 with size: 0.978699 MiB 00:10:15.808 element at address: 0x200007000000 with size: 0.959839 MiB 00:10:15.808 element at address: 0x200019200000 with size: 0.936584 MiB 00:10:15.808 element at address: 0x200000200000 with size: 0.837219 MiB 00:10:15.808 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:10:15.808 element at address: 0x20000b200000 with size: 0.489807 MiB 00:10:15.808 element at address: 0x200000800000 with size: 0.486511 MiB 00:10:15.808 element at address: 0x200019400000 with size: 0.485657 MiB 00:10:15.808 element at address: 0x200027e00000 with size: 0.402527 MiB 00:10:15.808 element at address: 0x200003a00000 with size: 0.351318 MiB 00:10:15.808 list of standard malloc elements. 
size: 199.250854 MiB 00:10:15.808 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:10:15.808 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:10:15.808 element at address: 0x200018efff80 with size: 1.000122 MiB 00:10:15.808 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:10:15.808 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:10:15.808 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:10:15.808 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:10:15.808 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:10:15.808 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:10:15.808 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000087c980 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000087cec0 with size: 0.000183 MiB 
00:10:15.808 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003adb300 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003adb500 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003affa80 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003affb40 with size: 0.000183 MiB 00:10:15.808 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:10:15.808 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:10:15.808 element at 
address: 0x20001aa91c00 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:10:15.808 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa940c0 
with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:10:15.809 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e670c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e67180 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6dd80 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6edc0 with size: 0.000183 MiB 
00:10:15.809 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:10:15.809 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:10:15.809 list of memzone associated elements. size: 602.262573 MiB 00:10:15.809 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:10:15.809 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:15.809 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:10:15.809 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:15.809 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:10:15.809 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_116028_0 00:10:15.809 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:10:15.809 associated memzone info: size: 48.002930 MiB name: MP_evtpool_116028_0 00:10:15.809 element at address: 0x200003fff380 with size: 48.003052 MiB 00:10:15.809 associated memzone info: size: 48.002930 MiB name: MP_msgpool_116028_0 00:10:15.809 element at address: 0x2000195be940 with size: 20.255554 MiB 00:10:15.809 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:15.809 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:10:15.809 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:15.809 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:10:15.809 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_116028 00:10:15.809 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:10:15.809 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_116028 00:10:15.809 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:10:15.809 associated memzone info: size: 1.007996 MiB name: MP_evtpool_116028 00:10:15.809 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:10:15.809 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:15.809 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:10:15.809 associated memzone 
info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:15.809 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:10:15.809 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:15.809 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:10:15.809 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:15.809 element at address: 0x200003eff180 with size: 1.000488 MiB 00:10:15.809 associated memzone info: size: 1.000366 MiB name: RG_ring_0_116028 00:10:15.809 element at address: 0x200003affc00 with size: 1.000488 MiB 00:10:15.809 associated memzone info: size: 1.000366 MiB name: RG_ring_1_116028 00:10:15.809 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:10:15.809 associated memzone info: size: 1.000366 MiB name: RG_ring_4_116028 00:10:15.809 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:10:15.809 associated memzone info: size: 1.000366 MiB name: RG_ring_5_116028 00:10:15.809 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:10:15.809 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_116028 00:10:15.809 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:10:15.809 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:15.809 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:10:15.809 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:15.809 element at address: 0x20001947c540 with size: 0.250488 MiB 00:10:15.809 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:15.809 element at address: 0x200003adf880 with size: 0.125488 MiB 00:10:15.809 associated memzone info: size: 0.125366 MiB name: RG_ring_2_116028 00:10:15.809 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:10:15.809 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:15.809 element at address: 0x200027e67240 with size: 0.023743 MiB 00:10:15.810 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:15.810 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:10:15.810 associated memzone info: size: 0.015991 MiB name: RG_ring_3_116028 00:10:15.810 element at address: 0x200027e6d380 with size: 0.002441 MiB 00:10:15.810 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:15.810 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:10:15.810 associated memzone info: size: 0.000183 MiB name: MP_msgpool_116028 00:10:15.810 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:10:15.810 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_116028 00:10:15.810 element at address: 0x200027e6de40 with size: 0.000305 MiB 00:10:15.810 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:15.810 00:52:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:15.810 00:52:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 116028 00:10:15.810 00:52:50 -- common/autotest_common.sh@936 -- # '[' -z 116028 ']' 00:10:15.810 00:52:50 -- common/autotest_common.sh@940 -- # kill -0 116028 00:10:15.810 00:52:50 -- common/autotest_common.sh@941 -- # uname 00:10:16.068 00:52:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:16.069 00:52:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116028 00:10:16.069 00:52:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:16.069 00:52:50 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:16.069 00:52:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116028' 00:10:16.069 killing process with pid 116028 00:10:16.069 00:52:50 -- common/autotest_common.sh@955 -- # kill 116028 00:10:16.069 00:52:50 -- common/autotest_common.sh@960 -- # wait 116028 00:10:16.636 00:10:16.636 real 0m1.983s 00:10:16.636 user 0m1.819s 00:10:16.636 sys 0m0.671s 00:10:16.636 00:52:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:16.636 00:52:50 -- common/autotest_common.sh@10 -- # set +x 00:10:16.636 ************************************ 00:10:16.636 END TEST dpdk_mem_utility 00:10:16.636 ************************************ 00:10:16.636 00:52:50 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:16.636 00:52:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:16.636 00:52:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:16.636 00:52:50 -- common/autotest_common.sh@10 -- # set +x 00:10:16.636 ************************************ 00:10:16.636 START TEST event 00:10:16.636 ************************************ 00:10:16.636 00:52:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:16.894 * Looking for test storage... 00:10:16.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:16.894 00:52:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:16.894 00:52:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:16.894 00:52:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:16.894 00:52:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:16.894 00:52:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:16.894 00:52:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:16.894 00:52:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:16.894 00:52:51 -- scripts/common.sh@335 -- # IFS=.-: 00:10:16.894 00:52:51 -- scripts/common.sh@335 -- # read -ra ver1 00:10:16.894 00:52:51 -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.894 00:52:51 -- scripts/common.sh@336 -- # read -ra ver2 00:10:16.894 00:52:51 -- scripts/common.sh@337 -- # local 'op=<' 00:10:16.894 00:52:51 -- scripts/common.sh@339 -- # ver1_l=2 00:10:16.894 00:52:51 -- scripts/common.sh@340 -- # ver2_l=1 00:10:16.894 00:52:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:16.894 00:52:51 -- scripts/common.sh@343 -- # case "$op" in 00:10:16.894 00:52:51 -- scripts/common.sh@344 -- # : 1 00:10:16.895 00:52:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:16.895 00:52:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.895 00:52:51 -- scripts/common.sh@364 -- # decimal 1 00:10:16.895 00:52:51 -- scripts/common.sh@352 -- # local d=1 00:10:16.895 00:52:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.895 00:52:51 -- scripts/common.sh@354 -- # echo 1 00:10:16.895 00:52:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:16.895 00:52:51 -- scripts/common.sh@365 -- # decimal 2 00:10:16.895 00:52:51 -- scripts/common.sh@352 -- # local d=2 00:10:16.895 00:52:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.895 00:52:51 -- scripts/common.sh@354 -- # echo 2 00:10:16.895 00:52:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:16.895 00:52:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:16.895 00:52:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:16.895 00:52:51 -- scripts/common.sh@367 -- # return 0 00:10:16.895 00:52:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.895 00:52:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:16.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.895 --rc genhtml_branch_coverage=1 00:10:16.895 --rc genhtml_function_coverage=1 00:10:16.895 --rc genhtml_legend=1 00:10:16.895 --rc geninfo_all_blocks=1 00:10:16.895 --rc geninfo_unexecuted_blocks=1 00:10:16.895 00:10:16.895 ' 00:10:16.895 00:52:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:16.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.895 --rc genhtml_branch_coverage=1 00:10:16.895 --rc genhtml_function_coverage=1 00:10:16.895 --rc genhtml_legend=1 00:10:16.895 --rc geninfo_all_blocks=1 00:10:16.895 --rc geninfo_unexecuted_blocks=1 00:10:16.895 00:10:16.895 ' 00:10:16.895 00:52:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:16.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.895 --rc genhtml_branch_coverage=1 00:10:16.895 --rc genhtml_function_coverage=1 00:10:16.895 --rc genhtml_legend=1 00:10:16.895 --rc geninfo_all_blocks=1 00:10:16.895 --rc geninfo_unexecuted_blocks=1 00:10:16.895 00:10:16.895 ' 00:10:16.895 00:52:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:16.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.895 --rc genhtml_branch_coverage=1 00:10:16.895 --rc genhtml_function_coverage=1 00:10:16.895 --rc genhtml_legend=1 00:10:16.895 --rc geninfo_all_blocks=1 00:10:16.895 --rc geninfo_unexecuted_blocks=1 00:10:16.895 00:10:16.895 ' 00:10:16.895 00:52:51 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:16.895 00:52:51 -- bdev/nbd_common.sh@6 -- # set -e 00:10:16.895 00:52:51 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:16.895 00:52:51 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:16.895 00:52:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:16.895 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:10:16.895 ************************************ 00:10:16.895 START TEST event_perf 00:10:16.895 ************************************ 00:10:16.895 00:52:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:16.895 Running I/O for 1 seconds...[2024-11-18 00:52:51.212622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:16.895 [2024-11-18 00:52:51.212894] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116124 ] 00:10:17.153 [2024-11-18 00:52:51.392431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.153 [2024-11-18 00:52:51.474876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.153 [2024-11-18 00:52:51.475019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.153 [2024-11-18 00:52:51.475337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.153 [2024-11-18 00:52:51.475204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.529 Running I/O for 1 seconds... 00:10:18.529 lcore 0: 191964 00:10:18.529 lcore 1: 191963 00:10:18.529 lcore 2: 191961 00:10:18.529 lcore 3: 191963 00:10:18.529 done. 00:10:18.529 00:10:18.529 real 0m1.491s 00:10:18.529 user 0m4.227s 00:10:18.529 sys 0m0.149s 00:10:18.529 00:52:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:18.529 00:52:52 -- common/autotest_common.sh@10 -- # set +x 00:10:18.529 ************************************ 00:10:18.529 END TEST event_perf 00:10:18.529 ************************************ 00:10:18.529 00:52:52 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:18.529 00:52:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:18.529 00:52:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.529 00:52:52 -- common/autotest_common.sh@10 -- # set +x 00:10:18.529 ************************************ 00:10:18.529 START TEST event_reactor 00:10:18.529 ************************************ 00:10:18.529 00:52:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:18.529 [2024-11-18 00:52:52.762984] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:18.529 [2024-11-18 00:52:52.763305] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116165 ] 00:10:18.529 [2024-11-18 00:52:52.906435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.788 [2024-11-18 00:52:52.987831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.166 test_start 00:10:20.166 oneshot 00:10:20.166 tick 100 00:10:20.166 tick 100 00:10:20.166 tick 250 00:10:20.166 tick 100 00:10:20.166 tick 100 00:10:20.166 tick 100 00:10:20.166 tick 250 00:10:20.166 tick 500 00:10:20.166 tick 100 00:10:20.166 tick 100 00:10:20.166 tick 250 00:10:20.166 tick 100 00:10:20.166 tick 100 00:10:20.166 test_end 00:10:20.166 00:10:20.166 real 0m1.445s 00:10:20.166 user 0m1.239s 00:10:20.166 sys 0m0.105s 00:10:20.166 00:52:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:20.166 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:10:20.166 ************************************ 00:10:20.166 END TEST event_reactor 00:10:20.166 ************************************ 00:10:20.167 00:52:54 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:20.167 00:52:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:20.167 00:52:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:20.167 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:10:20.167 ************************************ 00:10:20.167 START TEST event_reactor_perf 00:10:20.167 ************************************ 00:10:20.167 00:52:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:20.167 [2024-11-18 00:52:54.271484] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:20.167 [2024-11-18 00:52:54.271835] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116217 ] 00:10:20.167 [2024-11-18 00:52:54.415363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.167 [2024-11-18 00:52:54.486546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.542 test_start 00:10:21.542 test_end 00:10:21.542 Performance: 404214 events per second 00:10:21.542 00:10:21.542 real 0m1.433s 00:10:21.542 user 0m1.207s 00:10:21.542 sys 0m0.125s 00:10:21.542 00:52:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:21.542 00:52:55 -- common/autotest_common.sh@10 -- # set +x 00:10:21.542 ************************************ 00:10:21.542 END TEST event_reactor_perf 00:10:21.542 ************************************ 00:10:21.542 00:52:55 -- event/event.sh@49 -- # uname -s 00:10:21.542 00:52:55 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:21.542 00:52:55 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:21.542 00:52:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:21.542 00:52:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:21.542 00:52:55 -- common/autotest_common.sh@10 -- # set +x 00:10:21.542 ************************************ 00:10:21.542 START TEST event_scheduler 00:10:21.542 ************************************ 00:10:21.542 00:52:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:21.542 * Looking for test storage... 00:10:21.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:21.542 00:52:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:21.542 00:52:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:21.542 00:52:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:21.800 00:52:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:21.800 00:52:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:21.800 00:52:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:21.800 00:52:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:21.800 00:52:55 -- scripts/common.sh@335 -- # IFS=.-: 00:10:21.800 00:52:55 -- scripts/common.sh@335 -- # read -ra ver1 00:10:21.800 00:52:55 -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.800 00:52:55 -- scripts/common.sh@336 -- # read -ra ver2 00:10:21.800 00:52:55 -- scripts/common.sh@337 -- # local 'op=<' 00:10:21.800 00:52:55 -- scripts/common.sh@339 -- # ver1_l=2 00:10:21.800 00:52:55 -- scripts/common.sh@340 -- # ver2_l=1 00:10:21.800 00:52:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:21.800 00:52:55 -- scripts/common.sh@343 -- # case "$op" in 00:10:21.800 00:52:55 -- scripts/common.sh@344 -- # : 1 00:10:21.800 00:52:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:21.800 00:52:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:21.800 00:52:55 -- scripts/common.sh@364 -- # decimal 1 00:10:21.800 00:52:55 -- scripts/common.sh@352 -- # local d=1 00:10:21.800 00:52:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.800 00:52:55 -- scripts/common.sh@354 -- # echo 1 00:10:21.800 00:52:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:21.800 00:52:55 -- scripts/common.sh@365 -- # decimal 2 00:10:21.800 00:52:55 -- scripts/common.sh@352 -- # local d=2 00:10:21.800 00:52:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.800 00:52:55 -- scripts/common.sh@354 -- # echo 2 00:10:21.800 00:52:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:21.800 00:52:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:21.800 00:52:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:21.800 00:52:55 -- scripts/common.sh@367 -- # return 0 00:10:21.800 00:52:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.800 00:52:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:21.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.800 --rc genhtml_branch_coverage=1 00:10:21.800 --rc genhtml_function_coverage=1 00:10:21.800 --rc genhtml_legend=1 00:10:21.800 --rc geninfo_all_blocks=1 00:10:21.800 --rc geninfo_unexecuted_blocks=1 00:10:21.800 00:10:21.800 ' 00:10:21.800 00:52:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:21.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.800 --rc genhtml_branch_coverage=1 00:10:21.800 --rc genhtml_function_coverage=1 00:10:21.800 --rc genhtml_legend=1 00:10:21.800 --rc geninfo_all_blocks=1 00:10:21.800 --rc geninfo_unexecuted_blocks=1 00:10:21.800 00:10:21.800 ' 00:10:21.800 00:52:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:21.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.800 --rc genhtml_branch_coverage=1 00:10:21.800 --rc genhtml_function_coverage=1 00:10:21.800 --rc genhtml_legend=1 00:10:21.800 --rc geninfo_all_blocks=1 00:10:21.800 --rc geninfo_unexecuted_blocks=1 00:10:21.800 00:10:21.800 ' 00:10:21.800 00:52:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:21.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.800 --rc genhtml_branch_coverage=1 00:10:21.800 --rc genhtml_function_coverage=1 00:10:21.800 --rc genhtml_legend=1 00:10:21.800 --rc geninfo_all_blocks=1 00:10:21.800 --rc geninfo_unexecuted_blocks=1 00:10:21.800 00:10:21.800 ' 00:10:21.800 00:52:55 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:21.800 00:52:55 -- scheduler/scheduler.sh@35 -- # scheduler_pid=116289 00:10:21.800 00:52:55 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:21.800 00:52:55 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:21.800 00:52:55 -- scheduler/scheduler.sh@37 -- # waitforlisten 116289 00:10:21.800 00:52:55 -- common/autotest_common.sh@829 -- # '[' -z 116289 ']' 00:10:21.800 00:52:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.800 00:52:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.800 00:52:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
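The xtrace above is scripts/common.sh deciding whether the installed lcov is older than 2.x (`lt 1.15 2` via cmp_versions). A simplified, hedged reconstruction of that field-by-field version comparison, not the exact upstream helper (non-numeric version fields are ignored here):

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS=.-: op=$2 v
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == *=* ]]    # equal versions only satisfy <=, >= or ==
  }
  lt 1.15 2 && echo "lcov is older than 2.x"    # matches the check traced above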
00:10:21.800 00:52:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.800 00:52:55 -- common/autotest_common.sh@10 -- # set +x 00:10:21.800 [2024-11-18 00:52:56.057922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:21.800 [2024-11-18 00:52:56.058222] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116289 ] 00:10:22.058 [2024-11-18 00:52:56.245297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.059 [2024-11-18 00:52:56.329919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.059 [2024-11-18 00:52:56.330095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.059 [2024-11-18 00:52:56.330386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.059 [2024-11-18 00:52:56.330253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.626 00:52:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.626 00:52:56 -- common/autotest_common.sh@862 -- # return 0 00:10:22.626 00:52:56 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:22.626 00:52:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.626 00:52:56 -- common/autotest_common.sh@10 -- # set +x 00:10:22.626 POWER: Env isn't set yet! 00:10:22.626 POWER: Attempting to initialise ACPI cpufreq power management... 00:10:22.626 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:22.626 POWER: Cannot set governor of lcore 0 to userspace 00:10:22.626 POWER: Attempting to initialise PSTAT power management... 00:10:22.626 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:22.626 POWER: Cannot set governor of lcore 0 to performance 00:10:22.626 POWER: Attempting to initialise CPPC power management... 00:10:22.626 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:22.626 POWER: Cannot set governor of lcore 0 to userspace 00:10:22.626 POWER: Attempting to initialise VM power management... 
00:10:22.626 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:22.626 POWER: Unable to set Power Management Environment for lcore 0 00:10:22.626 [2024-11-18 00:52:56.989859] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:10:22.626 [2024-11-18 00:52:56.990063] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:10:22.626 [2024-11-18 00:52:56.990214] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:10:22.626 [2024-11-18 00:52:56.990328] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:22.626 [2024-11-18 00:52:56.990504] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:22.626 [2024-11-18 00:52:56.990634] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:22.626 00:52:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.626 00:52:56 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:22.626 00:52:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.626 00:52:56 -- common/autotest_common.sh@10 -- # set +x 00:10:22.884 [2024-11-18 00:52:57.115458] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:10:22.884 00:52:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.884 00:52:57 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:22.884 00:52:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:22.884 00:52:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.884 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:10:22.884 ************************************ 00:10:22.885 START TEST scheduler_create_thread 00:10:22.885 ************************************ 00:10:22.885 00:52:57 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:10:22.885 00:52:57 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:22.885 00:52:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.885 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:10:22.885 2 00:10:22.885 00:52:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.885 00:52:57 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:22.885 00:52:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.885 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:10:22.885 3 00:10:22.885 00:52:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.885 00:52:57 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:22.885 00:52:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.885 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:10:22.885 4 00:10:22.885 00:52:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.885 00:52:57 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:22.885 00:52:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.885 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:10:22.885 5 00:10:22.885 00:52:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.885 00:52:57 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:22.885 00:52:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.885 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:10:22.885 6 00:10:22.885 00:52:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.885 00:52:57 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:22.885 00:52:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.885 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:10:22.885 7 00:10:22.885 00:52:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.885 00:52:57 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:22.885 00:52:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.885 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:10:22.885 8 00:10:22.885 00:52:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.885 00:52:57 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:22.885 00:52:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.885 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:10:22.885 9 00:10:22.885 00:52:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.885 00:52:57 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:22.885 00:52:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.885 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:10:22.885 10 00:10:22.885 00:52:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.885 00:52:57 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:22.885 00:52:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.885 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:10:22.885 00:52:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.885 00:52:57 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:22.885 00:52:57 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:22.885 00:52:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.885 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:10:22.885 00:52:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.885 00:52:57 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:22.885 00:52:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.885 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:10:24.287 00:52:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.287 00:52:58 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:24.287 00:52:58 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:24.287 00:52:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.287 00:52:58 -- common/autotest_common.sh@10 -- # set +x 00:10:25.662 00:52:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.662 00:10:25.662 real 0m2.610s 00:10:25.662 user 0m0.009s 00:10:25.662 sys 0m0.008s 00:10:25.662 00:52:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:25.662 00:52:59 -- common/autotest_common.sh@10 -- # set +x 00:10:25.662 
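The scheduler_create_thread run above drives the test app entirely over JSON-RPC. A hedged sketch of the same sequence issued through scripts/rpc.py against the default /var/tmp/spdk.sock, with the thread names, core masks, and activity levels taken from the trace; treating the returned thread id as plain stdout, and having the scheduler_plugin module importable by rpc.py, are assumptions of this sketch:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
  $RPC framework_set_scheduler dynamic                          # pick the dynamic scheduler (app ran with --wait-for-rpc)
  $RPC framework_start_init                                     # complete SPDK startup
  $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
  $RPC scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle thread pinned to core 0
  tid=$($RPC scheduler_thread_create -n half_active -a 0)       # unpinned thread; id assumed to come back on stdout
  $RPC scheduler_thread_set_active "$tid" 50                    # raise it to ~50% active
  $RPC scheduler_thread_delete "$tid"                           # and remove it again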
************************************ 00:10:25.662 END TEST scheduler_create_thread 00:10:25.662 ************************************ 00:10:25.662 00:52:59 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:25.662 00:52:59 -- scheduler/scheduler.sh@46 -- # killprocess 116289 00:10:25.662 00:52:59 -- common/autotest_common.sh@936 -- # '[' -z 116289 ']' 00:10:25.662 00:52:59 -- common/autotest_common.sh@940 -- # kill -0 116289 00:10:25.662 00:52:59 -- common/autotest_common.sh@941 -- # uname 00:10:25.662 00:52:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:25.662 00:52:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116289 00:10:25.662 00:52:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:25.662 00:52:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:25.662 00:52:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116289' 00:10:25.662 killing process with pid 116289 00:10:25.662 00:52:59 -- common/autotest_common.sh@955 -- # kill 116289 00:10:25.662 00:52:59 -- common/autotest_common.sh@960 -- # wait 116289 00:10:25.920 [2024-11-18 00:53:00.220201] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:10:26.488 00:10:26.488 real 0m4.893s 00:10:26.488 user 0m8.592s 00:10:26.488 sys 0m0.610s 00:10:26.488 00:53:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:26.488 00:53:00 -- common/autotest_common.sh@10 -- # set +x 00:10:26.488 ************************************ 00:10:26.488 END TEST event_scheduler 00:10:26.488 ************************************ 00:10:26.488 00:53:00 -- event/event.sh@51 -- # modprobe -n nbd 00:10:26.488 00:53:00 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:26.488 00:53:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:26.488 00:53:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:26.488 00:53:00 -- common/autotest_common.sh@10 -- # set +x 00:10:26.488 ************************************ 00:10:26.488 START TEST app_repeat 00:10:26.488 ************************************ 00:10:26.488 00:53:00 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:10:26.488 00:53:00 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:26.488 00:53:00 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:26.488 00:53:00 -- event/event.sh@13 -- # local nbd_list 00:10:26.488 00:53:00 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:26.488 00:53:00 -- event/event.sh@14 -- # local bdev_list 00:10:26.488 00:53:00 -- event/event.sh@15 -- # local repeat_times=4 00:10:26.488 00:53:00 -- event/event.sh@17 -- # modprobe nbd 00:10:26.488 00:53:00 -- event/event.sh@19 -- # repeat_pid=116412 00:10:26.488 00:53:00 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:26.488 00:53:00 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:26.488 Process app_repeat pid: 116412 00:10:26.489 00:53:00 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 116412' 00:10:26.489 00:53:00 -- event/event.sh@23 -- # for i in {0..2} 00:10:26.489 00:53:00 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:26.489 spdk_app_start Round 0 00:10:26.489 00:53:00 -- event/event.sh@25 -- # waitforlisten 116412 /var/tmp/spdk-nbd.sock 00:10:26.489 00:53:00 -- common/autotest_common.sh@829 -- # '[' -z 116412 ']' 00:10:26.489 00:53:00 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:26.489 00:53:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:26.489 00:53:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:26.489 00:53:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.489 00:53:00 -- common/autotest_common.sh@10 -- # set +x 00:10:26.489 [2024-11-18 00:53:00.776857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:26.489 [2024-11-18 00:53:00.777119] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116412 ] 00:10:26.747 [2024-11-18 00:53:00.938480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:26.747 [2024-11-18 00:53:01.022250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.747 [2024-11-18 00:53:01.022251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.684 00:53:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.684 00:53:01 -- common/autotest_common.sh@862 -- # return 0 00:10:27.684 00:53:01 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:27.684 Malloc0 00:10:27.684 00:53:01 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:27.944 Malloc1 00:10:27.944 00:53:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@12 -- # local i 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:27.944 00:53:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:28.204 /dev/nbd0 00:10:28.204 00:53:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:28.204 00:53:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:28.204 00:53:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:28.204 00:53:02 -- common/autotest_common.sh@867 -- # local i 00:10:28.204 00:53:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:28.204 
00:53:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:28.204 00:53:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:28.204 00:53:02 -- common/autotest_common.sh@871 -- # break 00:10:28.204 00:53:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:28.204 00:53:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:28.204 00:53:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:28.204 1+0 records in 00:10:28.204 1+0 records out 00:10:28.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274911 s, 14.9 MB/s 00:10:28.204 00:53:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:28.204 00:53:02 -- common/autotest_common.sh@884 -- # size=4096 00:10:28.204 00:53:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:28.204 00:53:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:28.204 00:53:02 -- common/autotest_common.sh@887 -- # return 0 00:10:28.204 00:53:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.204 00:53:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:28.204 00:53:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:28.463 /dev/nbd1 00:10:28.463 00:53:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:28.463 00:53:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:28.463 00:53:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:28.463 00:53:02 -- common/autotest_common.sh@867 -- # local i 00:10:28.463 00:53:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:28.463 00:53:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:28.463 00:53:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:28.463 00:53:02 -- common/autotest_common.sh@871 -- # break 00:10:28.463 00:53:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:28.463 00:53:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:28.463 00:53:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:28.463 1+0 records in 00:10:28.463 1+0 records out 00:10:28.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033069 s, 12.4 MB/s 00:10:28.463 00:53:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:28.463 00:53:02 -- common/autotest_common.sh@884 -- # size=4096 00:10:28.463 00:53:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:28.463 00:53:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:28.463 00:53:02 -- common/autotest_common.sh@887 -- # return 0 00:10:28.463 00:53:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.463 00:53:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:28.463 00:53:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:28.463 00:53:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.463 00:53:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:28.722 00:53:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:28.722 { 00:10:28.722 "nbd_device": "/dev/nbd0", 00:10:28.722 "bdev_name": "Malloc0" 00:10:28.722 }, 00:10:28.722 { 00:10:28.722 "nbd_device": 
"/dev/nbd1", 00:10:28.722 "bdev_name": "Malloc1" 00:10:28.722 } 00:10:28.722 ]' 00:10:28.722 00:53:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:28.722 00:53:03 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:28.722 { 00:10:28.723 "nbd_device": "/dev/nbd0", 00:10:28.723 "bdev_name": "Malloc0" 00:10:28.723 }, 00:10:28.723 { 00:10:28.723 "nbd_device": "/dev/nbd1", 00:10:28.723 "bdev_name": "Malloc1" 00:10:28.723 } 00:10:28.723 ]' 00:10:28.723 00:53:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:28.723 /dev/nbd1' 00:10:28.723 00:53:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:28.723 /dev/nbd1' 00:10:28.723 00:53:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:28.723 00:53:03 -- bdev/nbd_common.sh@65 -- # count=2 00:10:28.723 00:53:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:10:28.723 00:53:03 -- bdev/nbd_common.sh@95 -- # count=2 00:10:28.723 00:53:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:28.723 00:53:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:28.723 00:53:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.723 00:53:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:28.723 00:53:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:28.723 00:53:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:28.723 00:53:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:28.723 00:53:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:28.982 256+0 records in 00:10:28.982 256+0 records out 00:10:28.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00929071 s, 113 MB/s 00:10:28.982 00:53:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:28.982 00:53:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:28.982 256+0 records in 00:10:28.982 256+0 records out 00:10:28.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268562 s, 39.0 MB/s 00:10:28.982 00:53:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:28.982 00:53:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:28.982 256+0 records in 00:10:28.982 256+0 records out 00:10:28.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329099 s, 31.9 MB/s 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@51 -- # local i 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.983 00:53:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@41 -- # break 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@45 -- # return 0 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@41 -- # break 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@45 -- # return 0 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.242 00:53:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:29.502 00:53:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:29.502 00:53:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:29.502 00:53:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:29.502 00:53:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:29.502 00:53:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:29.502 00:53:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:29.502 00:53:03 -- bdev/nbd_common.sh@65 -- # true 00:10:29.502 00:53:03 -- bdev/nbd_common.sh@65 -- # count=0 00:10:29.502 00:53:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:29.502 00:53:03 -- bdev/nbd_common.sh@104 -- # count=0 00:10:29.502 00:53:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:29.502 00:53:03 -- bdev/nbd_common.sh@109 -- # return 0 00:10:29.502 00:53:03 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:29.761 00:53:04 -- event/event.sh@35 -- # sleep 3 00:10:30.330 [2024-11-18 00:53:04.432953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:30.330 [2024-11-18 00:53:04.515043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.330 
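Each app_repeat round above exercises the same NBD round-trip through nbd_common.sh. A condensed sketch of that cycle, using the rpc.py socket, dd, and cmp invocations visible in the trace; the scratch-file handling is simplified here and mktemp stands in for the nbdrandtest file:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096                   # 64 MiB malloc bdev with 4 KiB blocks -> "Malloc0"
  $RPC nbd_start_disk Malloc0 /dev/nbd0             # export the bdev as an NBD block device
  tmp=$(mktemp)                                     # stand-in for the nbdrandtest file in the trace
  dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 1 MiB of random data
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$tmp" /dev/nbd0                     # read back and verify byte-for-byte
  $RPC nbd_stop_disk /dev/nbd0
  rm -f "$tmp"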
[2024-11-18 00:53:04.515047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.330 [2024-11-18 00:53:04.594785] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:30.330 [2024-11-18 00:53:04.594957] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:32.865 00:53:07 -- event/event.sh@23 -- # for i in {0..2} 00:10:32.865 spdk_app_start Round 1 00:10:32.865 00:53:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:32.865 00:53:07 -- event/event.sh@25 -- # waitforlisten 116412 /var/tmp/spdk-nbd.sock 00:10:32.865 00:53:07 -- common/autotest_common.sh@829 -- # '[' -z 116412 ']' 00:10:32.865 00:53:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:32.865 00:53:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:32.865 00:53:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:32.865 00:53:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.865 00:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:33.124 00:53:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.124 00:53:07 -- common/autotest_common.sh@862 -- # return 0 00:10:33.124 00:53:07 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:33.382 Malloc0 00:10:33.382 00:53:07 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:33.382 Malloc1 00:10:33.641 00:53:07 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@12 -- # local i 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:33.641 00:53:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:33.901 /dev/nbd0 00:10:33.901 00:53:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:33.901 00:53:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:33.901 00:53:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:33.901 00:53:08 -- common/autotest_common.sh@867 -- # local i 00:10:33.901 00:53:08 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:33.901 00:53:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:33.901 00:53:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:33.901 00:53:08 -- common/autotest_common.sh@871 -- # break 00:10:33.901 00:53:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:33.901 00:53:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:33.901 00:53:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:33.901 1+0 records in 00:10:33.901 1+0 records out 00:10:33.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026357 s, 15.5 MB/s 00:10:33.901 00:53:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:33.901 00:53:08 -- common/autotest_common.sh@884 -- # size=4096 00:10:33.901 00:53:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:33.901 00:53:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:33.901 00:53:08 -- common/autotest_common.sh@887 -- # return 0 00:10:33.901 00:53:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:33.901 00:53:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:33.901 00:53:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:34.160 /dev/nbd1 00:10:34.160 00:53:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:34.160 00:53:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:34.160 00:53:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:34.160 00:53:08 -- common/autotest_common.sh@867 -- # local i 00:10:34.160 00:53:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:34.160 00:53:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:34.160 00:53:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:34.160 00:53:08 -- common/autotest_common.sh@871 -- # break 00:10:34.160 00:53:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:34.160 00:53:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:34.160 00:53:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:34.160 1+0 records in 00:10:34.160 1+0 records out 00:10:34.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286915 s, 14.3 MB/s 00:10:34.160 00:53:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:34.160 00:53:08 -- common/autotest_common.sh@884 -- # size=4096 00:10:34.160 00:53:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:34.160 00:53:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:34.160 00:53:08 -- common/autotest_common.sh@887 -- # return 0 00:10:34.160 00:53:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:34.160 00:53:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:34.160 00:53:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:34.160 00:53:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.160 00:53:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:34.418 00:53:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:34.418 { 00:10:34.418 "nbd_device": "/dev/nbd0", 00:10:34.418 "bdev_name": "Malloc0" 
00:10:34.418 }, 00:10:34.418 { 00:10:34.418 "nbd_device": "/dev/nbd1", 00:10:34.418 "bdev_name": "Malloc1" 00:10:34.418 } 00:10:34.418 ]' 00:10:34.418 00:53:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:34.418 { 00:10:34.419 "nbd_device": "/dev/nbd0", 00:10:34.419 "bdev_name": "Malloc0" 00:10:34.419 }, 00:10:34.419 { 00:10:34.419 "nbd_device": "/dev/nbd1", 00:10:34.419 "bdev_name": "Malloc1" 00:10:34.419 } 00:10:34.419 ]' 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:34.419 /dev/nbd1' 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:34.419 /dev/nbd1' 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@65 -- # count=2 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@66 -- # echo 2 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@95 -- # count=2 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:34.419 256+0 records in 00:10:34.419 256+0 records out 00:10:34.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00580717 s, 181 MB/s 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:34.419 256+0 records in 00:10:34.419 256+0 records out 00:10:34.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307103 s, 34.1 MB/s 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:34.419 256+0 records in 00:10:34.419 256+0 records out 00:10:34.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0336566 s, 31.2 MB/s 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:34.419 00:53:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:10:34.678 00:53:08 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:34.678 00:53:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:34.678 00:53:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.678 00:53:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.678 00:53:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:34.678 00:53:08 -- bdev/nbd_common.sh@51 -- # local i 00:10:34.678 00:53:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.678 00:53:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@41 -- # break 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.937 00:53:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:35.205 00:53:09 -- bdev/nbd_common.sh@41 -- # break 00:10:35.205 00:53:09 -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.205 00:53:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:35.205 00:53:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.205 00:53:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:35.485 00:53:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:35.485 00:53:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:35.485 00:53:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:35.485 00:53:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:35.485 00:53:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:35.485 00:53:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:35.485 00:53:09 -- bdev/nbd_common.sh@65 -- # true 00:10:35.485 00:53:09 -- bdev/nbd_common.sh@65 -- # count=0 00:10:35.485 00:53:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:35.485 00:53:09 -- bdev/nbd_common.sh@104 -- # count=0 00:10:35.485 00:53:09 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:35.485 00:53:09 -- bdev/nbd_common.sh@109 -- # return 0 00:10:35.485 00:53:09 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:35.754 00:53:09 -- event/event.sh@35 -- # sleep 3 00:10:36.014 [2024-11-18 00:53:10.311999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:36.014 [2024-11-18 00:53:10.394837] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 0 00:10:36.014 [2024-11-18 00:53:10.394836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.273 [2024-11-18 00:53:10.476473] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:36.273 [2024-11-18 00:53:10.476585] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:38.807 00:53:12 -- event/event.sh@23 -- # for i in {0..2} 00:10:38.807 spdk_app_start Round 2 00:10:38.807 00:53:12 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:38.807 00:53:12 -- event/event.sh@25 -- # waitforlisten 116412 /var/tmp/spdk-nbd.sock 00:10:38.807 00:53:12 -- common/autotest_common.sh@829 -- # '[' -z 116412 ']' 00:10:38.807 00:53:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:38.807 00:53:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:38.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:38.807 00:53:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:38.807 00:53:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:38.807 00:53:12 -- common/autotest_common.sh@10 -- # set +x 00:10:39.066 00:53:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:39.066 00:53:13 -- common/autotest_common.sh@862 -- # return 0 00:10:39.066 00:53:13 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:39.325 Malloc0 00:10:39.325 00:53:13 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:39.585 Malloc1 00:10:39.585 00:53:13 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@12 -- # local i 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:39.585 00:53:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:39.845 /dev/nbd0 00:10:39.845 00:53:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:39.845 00:53:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:39.845 00:53:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:39.845 00:53:14 -- common/autotest_common.sh@867 -- # local i 
00:10:39.845 00:53:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:39.845 00:53:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:39.845 00:53:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:39.845 00:53:14 -- common/autotest_common.sh@871 -- # break 00:10:39.845 00:53:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:39.845 00:53:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:39.845 00:53:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:39.845 1+0 records in 00:10:39.845 1+0 records out 00:10:39.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295038 s, 13.9 MB/s 00:10:39.845 00:53:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:39.845 00:53:14 -- common/autotest_common.sh@884 -- # size=4096 00:10:39.845 00:53:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:39.845 00:53:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:39.845 00:53:14 -- common/autotest_common.sh@887 -- # return 0 00:10:39.845 00:53:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:39.845 00:53:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:39.845 00:53:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:40.104 /dev/nbd1 00:10:40.104 00:53:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:40.104 00:53:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:40.104 00:53:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:40.104 00:53:14 -- common/autotest_common.sh@867 -- # local i 00:10:40.104 00:53:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:40.104 00:53:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:40.104 00:53:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:40.104 00:53:14 -- common/autotest_common.sh@871 -- # break 00:10:40.104 00:53:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:40.104 00:53:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:40.104 00:53:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:40.104 1+0 records in 00:10:40.104 1+0 records out 00:10:40.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351202 s, 11.7 MB/s 00:10:40.104 00:53:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:40.104 00:53:14 -- common/autotest_common.sh@884 -- # size=4096 00:10:40.104 00:53:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:40.104 00:53:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:40.104 00:53:14 -- common/autotest_common.sh@887 -- # return 0 00:10:40.104 00:53:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:40.104 00:53:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:40.104 00:53:14 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:40.104 00:53:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.104 00:53:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:40.362 00:53:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:40.362 { 00:10:40.362 "nbd_device": "/dev/nbd0", 00:10:40.362 
"bdev_name": "Malloc0" 00:10:40.362 }, 00:10:40.362 { 00:10:40.362 "nbd_device": "/dev/nbd1", 00:10:40.362 "bdev_name": "Malloc1" 00:10:40.362 } 00:10:40.362 ]' 00:10:40.362 00:53:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:40.362 { 00:10:40.362 "nbd_device": "/dev/nbd0", 00:10:40.362 "bdev_name": "Malloc0" 00:10:40.362 }, 00:10:40.362 { 00:10:40.362 "nbd_device": "/dev/nbd1", 00:10:40.362 "bdev_name": "Malloc1" 00:10:40.362 } 00:10:40.362 ]' 00:10:40.362 00:53:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:40.621 00:53:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:40.621 /dev/nbd1' 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:40.622 /dev/nbd1' 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@65 -- # count=2 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@66 -- # echo 2 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@95 -- # count=2 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:40.622 256+0 records in 00:10:40.622 256+0 records out 00:10:40.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00679845 s, 154 MB/s 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:40.622 256+0 records in 00:10:40.622 256+0 records out 00:10:40.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218793 s, 47.9 MB/s 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:40.622 256+0 records in 00:10:40.622 256+0 records out 00:10:40.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306596 s, 34.2 MB/s 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@51 -- # local i 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.622 00:53:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:40.881 00:53:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:40.881 00:53:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:40.881 00:53:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:40.881 00:53:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.881 00:53:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.881 00:53:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:40.881 00:53:15 -- bdev/nbd_common.sh@41 -- # break 00:10:40.881 00:53:15 -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.881 00:53:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.881 00:53:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:41.140 00:53:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:41.140 00:53:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:41.140 00:53:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:41.140 00:53:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.140 00:53:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.140 00:53:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:41.140 00:53:15 -- bdev/nbd_common.sh@41 -- # break 00:10:41.140 00:53:15 -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.140 00:53:15 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:41.140 00:53:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.140 00:53:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:41.399 00:53:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:41.399 00:53:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:41.399 00:53:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:41.399 00:53:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:41.399 00:53:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:41.399 00:53:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:41.399 00:53:15 -- bdev/nbd_common.sh@65 -- # true 00:10:41.399 00:53:15 -- bdev/nbd_common.sh@65 -- # count=0 00:10:41.399 00:53:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:41.399 00:53:15 -- bdev/nbd_common.sh@104 -- # count=0 00:10:41.399 00:53:15 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:41.399 00:53:15 -- bdev/nbd_common.sh@109 -- # return 0 00:10:41.399 00:53:15 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:41.658 00:53:16 -- event/event.sh@35 -- # sleep 3 00:10:42.225 [2024-11-18 00:53:16.366615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:42.225 
[2024-11-18 00:53:16.453082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.225 [2024-11-18 00:53:16.453082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.225 [2024-11-18 00:53:16.535243] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:42.225 [2024-11-18 00:53:16.535372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:44.756 00:53:19 -- event/event.sh@38 -- # waitforlisten 116412 /var/tmp/spdk-nbd.sock 00:10:44.756 00:53:19 -- common/autotest_common.sh@829 -- # '[' -z 116412 ']' 00:10:44.756 00:53:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:44.756 00:53:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:44.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:44.756 00:53:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:44.756 00:53:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:44.756 00:53:19 -- common/autotest_common.sh@10 -- # set +x 00:10:45.014 00:53:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:45.014 00:53:19 -- common/autotest_common.sh@862 -- # return 0 00:10:45.014 00:53:19 -- event/event.sh@39 -- # killprocess 116412 00:10:45.014 00:53:19 -- common/autotest_common.sh@936 -- # '[' -z 116412 ']' 00:10:45.014 00:53:19 -- common/autotest_common.sh@940 -- # kill -0 116412 00:10:45.014 00:53:19 -- common/autotest_common.sh@941 -- # uname 00:10:45.014 00:53:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:45.014 00:53:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116412 00:10:45.014 00:53:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:45.014 00:53:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:45.014 killing process with pid 116412 00:10:45.014 00:53:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116412' 00:10:45.014 00:53:19 -- common/autotest_common.sh@955 -- # kill 116412 00:10:45.014 00:53:19 -- common/autotest_common.sh@960 -- # wait 116412 00:10:45.274 spdk_app_start is called in Round 0. 00:10:45.274 Shutdown signal received, stop current app iteration 00:10:45.274 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:10:45.274 spdk_app_start is called in Round 1. 00:10:45.274 Shutdown signal received, stop current app iteration 00:10:45.274 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:10:45.274 spdk_app_start is called in Round 2. 00:10:45.274 Shutdown signal received, stop current app iteration 00:10:45.274 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:10:45.274 spdk_app_start is called in Round 3. 
00:10:45.274 Shutdown signal received, stop current app iteration 00:10:45.533 00:53:19 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:45.533 00:53:19 -- event/event.sh@42 -- # return 0 00:10:45.533 00:10:45.533 real 0m18.952s 00:10:45.533 user 0m40.937s 00:10:45.533 sys 0m3.826s 00:10:45.533 00:53:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:45.533 00:53:19 -- common/autotest_common.sh@10 -- # set +x 00:10:45.533 ************************************ 00:10:45.533 END TEST app_repeat 00:10:45.533 ************************************ 00:10:45.533 00:53:19 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:45.533 00:53:19 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:45.533 00:53:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:45.533 00:53:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:45.533 00:53:19 -- common/autotest_common.sh@10 -- # set +x 00:10:45.533 ************************************ 00:10:45.533 START TEST cpu_locks 00:10:45.533 ************************************ 00:10:45.533 00:53:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:45.533 * Looking for test storage... 00:10:45.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:45.533 00:53:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:45.533 00:53:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:45.533 00:53:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:45.533 00:53:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:45.533 00:53:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:45.533 00:53:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:45.533 00:53:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:45.533 00:53:19 -- scripts/common.sh@335 -- # IFS=.-: 00:10:45.792 00:53:19 -- scripts/common.sh@335 -- # read -ra ver1 00:10:45.792 00:53:19 -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.792 00:53:19 -- scripts/common.sh@336 -- # read -ra ver2 00:10:45.792 00:53:19 -- scripts/common.sh@337 -- # local 'op=<' 00:10:45.792 00:53:19 -- scripts/common.sh@339 -- # ver1_l=2 00:10:45.792 00:53:19 -- scripts/common.sh@340 -- # ver2_l=1 00:10:45.792 00:53:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:45.792 00:53:19 -- scripts/common.sh@343 -- # case "$op" in 00:10:45.792 00:53:19 -- scripts/common.sh@344 -- # : 1 00:10:45.792 00:53:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:45.792 00:53:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.792 00:53:19 -- scripts/common.sh@364 -- # decimal 1 00:10:45.792 00:53:19 -- scripts/common.sh@352 -- # local d=1 00:10:45.792 00:53:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.792 00:53:19 -- scripts/common.sh@354 -- # echo 1 00:10:45.792 00:53:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:45.792 00:53:19 -- scripts/common.sh@365 -- # decimal 2 00:10:45.792 00:53:19 -- scripts/common.sh@352 -- # local d=2 00:10:45.792 00:53:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.792 00:53:19 -- scripts/common.sh@354 -- # echo 2 00:10:45.792 00:53:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:45.792 00:53:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:45.792 00:53:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:45.792 00:53:19 -- scripts/common.sh@367 -- # return 0 00:10:45.792 00:53:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.792 00:53:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:45.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.792 --rc genhtml_branch_coverage=1 00:10:45.792 --rc genhtml_function_coverage=1 00:10:45.792 --rc genhtml_legend=1 00:10:45.792 --rc geninfo_all_blocks=1 00:10:45.792 --rc geninfo_unexecuted_blocks=1 00:10:45.792 00:10:45.792 ' 00:10:45.792 00:53:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:45.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.793 --rc genhtml_branch_coverage=1 00:10:45.793 --rc genhtml_function_coverage=1 00:10:45.793 --rc genhtml_legend=1 00:10:45.793 --rc geninfo_all_blocks=1 00:10:45.793 --rc geninfo_unexecuted_blocks=1 00:10:45.793 00:10:45.793 ' 00:10:45.793 00:53:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:45.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.793 --rc genhtml_branch_coverage=1 00:10:45.793 --rc genhtml_function_coverage=1 00:10:45.793 --rc genhtml_legend=1 00:10:45.793 --rc geninfo_all_blocks=1 00:10:45.793 --rc geninfo_unexecuted_blocks=1 00:10:45.793 00:10:45.793 ' 00:10:45.793 00:53:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:45.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.793 --rc genhtml_branch_coverage=1 00:10:45.793 --rc genhtml_function_coverage=1 00:10:45.793 --rc genhtml_legend=1 00:10:45.793 --rc geninfo_all_blocks=1 00:10:45.793 --rc geninfo_unexecuted_blocks=1 00:10:45.793 00:10:45.793 ' 00:10:45.793 00:53:19 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:45.793 00:53:19 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:45.793 00:53:19 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:45.793 00:53:19 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:45.793 00:53:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:45.793 00:53:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:45.793 00:53:19 -- common/autotest_common.sh@10 -- # set +x 00:10:45.793 ************************************ 00:10:45.793 START TEST default_locks 00:10:45.793 ************************************ 00:10:45.793 00:53:19 -- common/autotest_common.sh@1114 -- # default_locks 00:10:45.793 00:53:19 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=116937 00:10:45.793 00:53:19 -- event/cpu_locks.sh@47 -- # waitforlisten 116937 00:10:45.793 00:53:19 -- event/cpu_locks.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:45.793 00:53:19 -- common/autotest_common.sh@829 -- # '[' -z 116937 ']' 00:10:45.793 00:53:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.793 00:53:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:45.793 00:53:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.793 00:53:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:45.793 00:53:19 -- common/autotest_common.sh@10 -- # set +x 00:10:45.793 [2024-11-18 00:53:20.043814] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:45.793 [2024-11-18 00:53:20.044082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116937 ] 00:10:45.793 [2024-11-18 00:53:20.193697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.051 [2024-11-18 00:53:20.285754] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:46.051 [2024-11-18 00:53:20.286267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.619 00:53:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:46.619 00:53:20 -- common/autotest_common.sh@862 -- # return 0 00:10:46.619 00:53:20 -- event/cpu_locks.sh@49 -- # locks_exist 116937 00:10:46.619 00:53:20 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:46.619 00:53:20 -- event/cpu_locks.sh@22 -- # lslocks -p 116937 00:10:47.187 00:53:21 -- event/cpu_locks.sh@50 -- # killprocess 116937 00:10:47.187 00:53:21 -- common/autotest_common.sh@936 -- # '[' -z 116937 ']' 00:10:47.187 00:53:21 -- common/autotest_common.sh@940 -- # kill -0 116937 00:10:47.187 00:53:21 -- common/autotest_common.sh@941 -- # uname 00:10:47.187 00:53:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:47.187 00:53:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116937 00:10:47.187 00:53:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:47.187 00:53:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:47.187 00:53:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116937' 00:10:47.187 killing process with pid 116937 00:10:47.187 00:53:21 -- common/autotest_common.sh@955 -- # kill 116937 00:10:47.187 00:53:21 -- common/autotest_common.sh@960 -- # wait 116937 00:10:47.755 00:53:22 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 116937 00:10:47.755 00:53:22 -- common/autotest_common.sh@650 -- # local es=0 00:10:47.755 00:53:22 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 116937 00:10:47.755 00:53:22 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:47.755 00:53:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.755 00:53:22 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:47.755 00:53:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.755 00:53:22 -- common/autotest_common.sh@653 -- # waitforlisten 116937 00:10:47.755 00:53:22 -- common/autotest_common.sh@829 -- # '[' -z 116937 ']' 00:10:47.755 00:53:22 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:10:47.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.755 00:53:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:47.755 00:53:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.755 00:53:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:47.755 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:10:47.755 ERROR: process (pid: 116937) is no longer running 00:10:47.755 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (116937) - No such process 00:10:47.755 00:53:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:47.755 00:53:22 -- common/autotest_common.sh@862 -- # return 1 00:10:47.755 00:53:22 -- common/autotest_common.sh@653 -- # es=1 00:10:47.755 00:53:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:47.755 00:53:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:47.755 00:53:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:47.755 00:53:22 -- event/cpu_locks.sh@54 -- # no_locks 00:10:47.755 00:53:22 -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:47.755 00:53:22 -- event/cpu_locks.sh@26 -- # local lock_files 00:10:47.755 00:53:22 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:47.755 00:10:47.755 real 0m2.082s 00:10:47.755 user 0m2.011s 00:10:47.755 sys 0m0.783s 00:10:47.755 00:53:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:47.755 ************************************ 00:10:47.755 END TEST default_locks 00:10:47.755 ************************************ 00:10:47.755 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:10:47.755 00:53:22 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:47.755 00:53:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:47.755 00:53:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.755 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:10:47.755 ************************************ 00:10:47.755 START TEST default_locks_via_rpc 00:10:47.755 ************************************ 00:10:47.755 00:53:22 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:10:47.755 00:53:22 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=116998 00:10:47.755 00:53:22 -- event/cpu_locks.sh@63 -- # waitforlisten 116998 00:10:47.755 00:53:22 -- common/autotest_common.sh@829 -- # '[' -z 116998 ']' 00:10:47.755 00:53:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.755 00:53:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:47.755 00:53:22 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:47.755 00:53:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.755 00:53:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:47.755 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:10:48.014 [2024-11-18 00:53:22.175793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:48.014 [2024-11-18 00:53:22.176263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116998 ] 00:10:48.014 [2024-11-18 00:53:22.318539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.014 [2024-11-18 00:53:22.406919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:48.014 [2024-11-18 00:53:22.407419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.950 00:53:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.950 00:53:23 -- common/autotest_common.sh@862 -- # return 0 00:10:48.950 00:53:23 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:48.950 00:53:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.950 00:53:23 -- common/autotest_common.sh@10 -- # set +x 00:10:48.950 00:53:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.950 00:53:23 -- event/cpu_locks.sh@67 -- # no_locks 00:10:48.950 00:53:23 -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:48.950 00:53:23 -- event/cpu_locks.sh@26 -- # local lock_files 00:10:48.950 00:53:23 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:48.950 00:53:23 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:48.950 00:53:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.950 00:53:23 -- common/autotest_common.sh@10 -- # set +x 00:10:48.950 00:53:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.950 00:53:23 -- event/cpu_locks.sh@71 -- # locks_exist 116998 00:10:48.950 00:53:23 -- event/cpu_locks.sh@22 -- # lslocks -p 116998 00:10:48.950 00:53:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:49.210 00:53:23 -- event/cpu_locks.sh@73 -- # killprocess 116998 00:10:49.210 00:53:23 -- common/autotest_common.sh@936 -- # '[' -z 116998 ']' 00:10:49.210 00:53:23 -- common/autotest_common.sh@940 -- # kill -0 116998 00:10:49.210 00:53:23 -- common/autotest_common.sh@941 -- # uname 00:10:49.210 00:53:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:49.210 00:53:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116998 00:10:49.210 00:53:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:49.210 00:53:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:49.210 00:53:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116998' 00:10:49.210 killing process with pid 116998 00:10:49.210 00:53:23 -- common/autotest_common.sh@955 -- # kill 116998 00:10:49.210 00:53:23 -- common/autotest_common.sh@960 -- # wait 116998 00:10:50.146 00:10:50.146 real 0m2.117s 00:10:50.146 user 0m2.115s 00:10:50.146 sys 0m0.763s 00:10:50.146 00:53:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:50.146 00:53:24 -- common/autotest_common.sh@10 -- # set +x 00:10:50.146 ************************************ 00:10:50.146 END TEST default_locks_via_rpc 00:10:50.146 ************************************ 00:10:50.146 00:53:24 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:50.146 00:53:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:50.146 00:53:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:50.146 00:53:24 -- common/autotest_common.sh@10 -- # set +x 00:10:50.146 
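Both default_locks variants traced above come down to the same probe: start a target pinned to one core, then confirm it is holding a lock on its per-core lock file. A minimal sketch of that probe, using the launch command and lslocks/grep check from this run (the /var/tmp/spdk_cpu_lock_* naming is the one used by check_remaining_locks later in cpu_locks.sh):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &      # claims core 0
  pid=$!
  # locks_exist: the target must appear in lslocks with an spdk_cpu_lock entry
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"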
************************************ 00:10:50.146 START TEST non_locking_app_on_locked_coremask 00:10:50.146 ************************************ 00:10:50.146 00:53:24 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:10:50.146 00:53:24 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=117061 00:10:50.146 00:53:24 -- event/cpu_locks.sh@81 -- # waitforlisten 117061 /var/tmp/spdk.sock 00:10:50.146 00:53:24 -- common/autotest_common.sh@829 -- # '[' -z 117061 ']' 00:10:50.146 00:53:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.146 00:53:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.146 00:53:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.146 00:53:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.146 00:53:24 -- common/autotest_common.sh@10 -- # set +x 00:10:50.146 00:53:24 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:50.146 [2024-11-18 00:53:24.351602] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:50.146 [2024-11-18 00:53:24.351996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117061 ] 00:10:50.146 [2024-11-18 00:53:24.498239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.405 [2024-11-18 00:53:24.587379] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:50.405 [2024-11-18 00:53:24.587851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.972 00:53:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:50.972 00:53:25 -- common/autotest_common.sh@862 -- # return 0 00:10:50.972 00:53:25 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=117076 00:10:50.972 00:53:25 -- event/cpu_locks.sh@85 -- # waitforlisten 117076 /var/tmp/spdk2.sock 00:10:50.972 00:53:25 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:50.972 00:53:25 -- common/autotest_common.sh@829 -- # '[' -z 117076 ']' 00:10:50.972 00:53:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:50.972 00:53:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.972 00:53:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:50.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:50.972 00:53:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.972 00:53:25 -- common/autotest_common.sh@10 -- # set +x 00:10:50.972 [2024-11-18 00:53:25.331094] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:50.972 [2024-11-18 00:53:25.331330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117076 ] 00:10:51.230 [2024-11-18 00:53:25.471220] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
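In this test the second target deliberately shares the core mask of the first but disables lock files and listens on a separate RPC socket, which is why both instances come up; the two launch commands below are taken from the trace above (the lock file name in the comment follows the spdk_cpu_lock_* convention seen later in this log):

  # first instance claims core 0 and holds its lock file (/var/tmp/spdk_cpu_lock_000)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  # second instance shares core 0, skips the lock file, and answers on its own socket
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &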
00:10:51.230 [2024-11-18 00:53:25.486325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.489 [2024-11-18 00:53:25.650886] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:51.489 [2024-11-18 00:53:25.666454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.056 00:53:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.056 00:53:26 -- common/autotest_common.sh@862 -- # return 0 00:10:52.056 00:53:26 -- event/cpu_locks.sh@87 -- # locks_exist 117061 00:10:52.056 00:53:26 -- event/cpu_locks.sh@22 -- # lslocks -p 117061 00:10:52.057 00:53:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:52.624 00:53:26 -- event/cpu_locks.sh@89 -- # killprocess 117061 00:10:52.624 00:53:26 -- common/autotest_common.sh@936 -- # '[' -z 117061 ']' 00:10:52.624 00:53:26 -- common/autotest_common.sh@940 -- # kill -0 117061 00:10:52.624 00:53:26 -- common/autotest_common.sh@941 -- # uname 00:10:52.624 00:53:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:52.624 00:53:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117061 00:10:52.624 00:53:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:52.624 00:53:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:52.624 00:53:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117061' 00:10:52.624 killing process with pid 117061 00:10:52.624 00:53:26 -- common/autotest_common.sh@955 -- # kill 117061 00:10:52.624 00:53:26 -- common/autotest_common.sh@960 -- # wait 117061 00:10:53.999 00:53:28 -- event/cpu_locks.sh@90 -- # killprocess 117076 00:10:53.999 00:53:28 -- common/autotest_common.sh@936 -- # '[' -z 117076 ']' 00:10:53.999 00:53:28 -- common/autotest_common.sh@940 -- # kill -0 117076 00:10:53.999 00:53:28 -- common/autotest_common.sh@941 -- # uname 00:10:53.999 00:53:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:53.999 00:53:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117076 00:10:53.999 00:53:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:53.999 00:53:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:53.999 00:53:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117076' 00:10:53.999 killing process with pid 117076 00:10:53.999 00:53:28 -- common/autotest_common.sh@955 -- # kill 117076 00:10:53.999 00:53:28 -- common/autotest_common.sh@960 -- # wait 117076 00:10:54.999 00:10:54.999 real 0m4.751s 00:10:54.999 user 0m4.845s 00:10:54.999 sys 0m1.504s 00:10:54.999 00:53:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:54.999 00:53:29 -- common/autotest_common.sh@10 -- # set +x 00:10:54.999 ************************************ 00:10:54.999 END TEST non_locking_app_on_locked_coremask 00:10:54.999 ************************************ 00:10:54.999 00:53:29 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:54.999 00:53:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:54.999 00:53:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:54.999 00:53:29 -- common/autotest_common.sh@10 -- # set +x 00:10:54.999 ************************************ 00:10:54.999 START TEST locking_app_on_unlocked_coremask 00:10:54.999 ************************************ 00:10:54.999 00:53:29 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:10:54.999 
00:53:29 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:54.999 00:53:29 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=117155 00:10:54.999 00:53:29 -- event/cpu_locks.sh@99 -- # waitforlisten 117155 /var/tmp/spdk.sock 00:10:54.999 00:53:29 -- common/autotest_common.sh@829 -- # '[' -z 117155 ']' 00:10:54.999 00:53:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.999 00:53:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.999 00:53:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.999 00:53:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.999 00:53:29 -- common/autotest_common.sh@10 -- # set +x 00:10:54.999 [2024-11-18 00:53:29.166335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:54.999 [2024-11-18 00:53:29.166802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117155 ] 00:10:54.999 [2024-11-18 00:53:29.311124] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:54.999 [2024-11-18 00:53:29.311454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.258 [2024-11-18 00:53:29.403493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:55.258 [2024-11-18 00:53:29.404013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.826 00:53:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.826 00:53:30 -- common/autotest_common.sh@862 -- # return 0 00:10:55.826 00:53:30 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=117176 00:10:55.826 00:53:30 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:55.826 00:53:30 -- event/cpu_locks.sh@103 -- # waitforlisten 117176 /var/tmp/spdk2.sock 00:10:55.826 00:53:30 -- common/autotest_common.sh@829 -- # '[' -z 117176 ']' 00:10:55.826 00:53:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:55.826 00:53:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:55.826 00:53:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:55.826 00:53:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.826 00:53:30 -- common/autotest_common.sh@10 -- # set +x 00:10:55.826 [2024-11-18 00:53:30.207530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:55.826 [2024-11-18 00:53:30.207773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117176 ] 00:10:56.085 [2024-11-18 00:53:30.371076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.343 [2024-11-18 00:53:30.541668] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:56.343 [2024-11-18 00:53:30.554373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.721 00:53:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:57.721 00:53:31 -- common/autotest_common.sh@862 -- # return 0 00:10:57.721 00:53:31 -- event/cpu_locks.sh@105 -- # locks_exist 117176 00:10:57.721 00:53:31 -- event/cpu_locks.sh@22 -- # lslocks -p 117176 00:10:57.721 00:53:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:57.980 00:53:32 -- event/cpu_locks.sh@107 -- # killprocess 117155 00:10:57.980 00:53:32 -- common/autotest_common.sh@936 -- # '[' -z 117155 ']' 00:10:57.980 00:53:32 -- common/autotest_common.sh@940 -- # kill -0 117155 00:10:57.980 00:53:32 -- common/autotest_common.sh@941 -- # uname 00:10:57.980 00:53:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:57.980 00:53:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117155 00:10:57.980 00:53:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:57.980 00:53:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:57.980 00:53:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117155' 00:10:57.980 killing process with pid 117155 00:10:57.980 00:53:32 -- common/autotest_common.sh@955 -- # kill 117155 00:10:57.980 00:53:32 -- common/autotest_common.sh@960 -- # wait 117155 00:10:59.358 00:53:33 -- event/cpu_locks.sh@108 -- # killprocess 117176 00:10:59.358 00:53:33 -- common/autotest_common.sh@936 -- # '[' -z 117176 ']' 00:10:59.358 00:53:33 -- common/autotest_common.sh@940 -- # kill -0 117176 00:10:59.358 00:53:33 -- common/autotest_common.sh@941 -- # uname 00:10:59.358 00:53:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:59.358 00:53:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117176 00:10:59.358 00:53:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:59.358 00:53:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:59.358 00:53:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117176' 00:10:59.358 killing process with pid 117176 00:10:59.358 00:53:33 -- common/autotest_common.sh@955 -- # kill 117176 00:10:59.358 00:53:33 -- common/autotest_common.sh@960 -- # wait 117176 00:11:00.296 00:11:00.296 real 0m5.234s 00:11:00.296 user 0m5.435s 00:11:00.296 sys 0m1.480s 00:11:00.296 00:53:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:00.296 ************************************ 00:11:00.296 END TEST locking_app_on_unlocked_coremask 00:11:00.296 00:53:34 -- common/autotest_common.sh@10 -- # set +x 00:11:00.296 ************************************ 00:11:00.296 00:53:34 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:00.296 00:53:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:00.296 00:53:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:00.296 00:53:34 -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.296 ************************************ 00:11:00.296 START TEST locking_app_on_locked_coremask 00:11:00.296 ************************************ 00:11:00.296 00:53:34 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:11:00.296 00:53:34 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=117273 00:11:00.296 00:53:34 -- event/cpu_locks.sh@116 -- # waitforlisten 117273 /var/tmp/spdk.sock 00:11:00.296 00:53:34 -- common/autotest_common.sh@829 -- # '[' -z 117273 ']' 00:11:00.296 00:53:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.296 00:53:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.296 00:53:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.296 00:53:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.296 00:53:34 -- common/autotest_common.sh@10 -- # set +x 00:11:00.296 00:53:34 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:00.296 [2024-11-18 00:53:34.462006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:00.296 [2024-11-18 00:53:34.462638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117273 ] 00:11:00.296 [2024-11-18 00:53:34.609445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.296 [2024-11-18 00:53:34.694552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:00.296 [2024-11-18 00:53:34.695038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.234 00:53:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.234 00:53:35 -- common/autotest_common.sh@862 -- # return 0 00:11:01.234 00:53:35 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=117294 00:11:01.234 00:53:35 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:01.234 00:53:35 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 117294 /var/tmp/spdk2.sock 00:11:01.234 00:53:35 -- common/autotest_common.sh@650 -- # local es=0 00:11:01.234 00:53:35 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 117294 /var/tmp/spdk2.sock 00:11:01.234 00:53:35 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:01.234 00:53:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.234 00:53:35 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:01.234 00:53:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.234 00:53:35 -- common/autotest_common.sh@653 -- # waitforlisten 117294 /var/tmp/spdk2.sock 00:11:01.234 00:53:35 -- common/autotest_common.sh@829 -- # '[' -z 117294 ']' 00:11:01.234 00:53:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:01.234 00:53:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:01.234 00:53:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:11:01.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:01.234 00:53:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:01.234 00:53:35 -- common/autotest_common.sh@10 -- # set +x 00:11:01.234 [2024-11-18 00:53:35.511494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:01.234 [2024-11-18 00:53:35.511732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117294 ] 00:11:01.492 [2024-11-18 00:53:35.662785] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 117273 has claimed it. 00:11:01.492 [2024-11-18 00:53:35.678233] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:02.059 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (117294) - No such process 00:11:02.059 ERROR: process (pid: 117294) is no longer running 00:11:02.060 00:53:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.060 00:53:36 -- common/autotest_common.sh@862 -- # return 1 00:11:02.060 00:53:36 -- common/autotest_common.sh@653 -- # es=1 00:11:02.060 00:53:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:02.060 00:53:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:02.060 00:53:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:02.060 00:53:36 -- event/cpu_locks.sh@122 -- # locks_exist 117273 00:11:02.060 00:53:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:02.060 00:53:36 -- event/cpu_locks.sh@22 -- # lslocks -p 117273 00:11:02.318 00:53:36 -- event/cpu_locks.sh@124 -- # killprocess 117273 00:11:02.318 00:53:36 -- common/autotest_common.sh@936 -- # '[' -z 117273 ']' 00:11:02.318 00:53:36 -- common/autotest_common.sh@940 -- # kill -0 117273 00:11:02.318 00:53:36 -- common/autotest_common.sh@941 -- # uname 00:11:02.318 00:53:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:02.318 00:53:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117273 00:11:02.318 00:53:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:02.318 00:53:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:02.318 00:53:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117273' 00:11:02.318 killing process with pid 117273 00:11:02.318 00:53:36 -- common/autotest_common.sh@955 -- # kill 117273 00:11:02.318 00:53:36 -- common/autotest_common.sh@960 -- # wait 117273 00:11:02.885 00:11:02.885 real 0m2.824s 00:11:02.885 user 0m3.042s 00:11:02.885 sys 0m0.881s 00:11:02.885 00:53:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:02.885 00:53:37 -- common/autotest_common.sh@10 -- # set +x 00:11:02.885 ************************************ 00:11:02.885 END TEST locking_app_on_locked_coremask 00:11:02.885 ************************************ 00:11:02.885 00:53:37 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:02.886 00:53:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:02.886 00:53:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:02.886 00:53:37 -- common/autotest_common.sh@10 -- # set +x 00:11:02.886 ************************************ 00:11:02.886 START TEST locking_overlapped_coremask 00:11:02.886 
************************************ 00:11:02.886 00:53:37 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:11:02.886 00:53:37 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=117339 00:11:02.886 00:53:37 -- event/cpu_locks.sh@133 -- # waitforlisten 117339 /var/tmp/spdk.sock 00:11:02.886 00:53:37 -- common/autotest_common.sh@829 -- # '[' -z 117339 ']' 00:11:02.886 00:53:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.886 00:53:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.886 00:53:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.886 00:53:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.886 00:53:37 -- common/autotest_common.sh@10 -- # set +x 00:11:02.886 00:53:37 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:03.144 [2024-11-18 00:53:37.355526] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:03.144 [2024-11-18 00:53:37.355934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117339 ] 00:11:03.144 [2024-11-18 00:53:37.518763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:03.403 [2024-11-18 00:53:37.621366] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:03.403 [2024-11-18 00:53:37.622092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.403 [2024-11-18 00:53:37.622212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.403 [2024-11-18 00:53:37.622389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.970 00:53:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:03.970 00:53:38 -- common/autotest_common.sh@862 -- # return 0 00:11:03.970 00:53:38 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=117362 00:11:03.970 00:53:38 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 117362 /var/tmp/spdk2.sock 00:11:03.970 00:53:38 -- common/autotest_common.sh@650 -- # local es=0 00:11:03.970 00:53:38 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 117362 /var/tmp/spdk2.sock 00:11:03.970 00:53:38 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:03.970 00:53:38 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:03.970 00:53:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:03.970 00:53:38 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:03.970 00:53:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:03.970 00:53:38 -- common/autotest_common.sh@653 -- # waitforlisten 117362 /var/tmp/spdk2.sock 00:11:03.971 00:53:38 -- common/autotest_common.sh@829 -- # '[' -z 117362 ']' 00:11:03.971 00:53:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:03.971 00:53:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.971 00:53:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:11:03.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:03.971 00:53:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.971 00:53:38 -- common/autotest_common.sh@10 -- # set +x 00:11:03.971 [2024-11-18 00:53:38.347842] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:03.971 [2024-11-18 00:53:38.348055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117362 ] 00:11:04.230 [2024-11-18 00:53:38.525997] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117339 has claimed it. 00:11:04.230 [2024-11-18 00:53:38.538297] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:04.797 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (117362) - No such process 00:11:04.797 ERROR: process (pid: 117362) is no longer running 00:11:04.797 00:53:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.797 00:53:38 -- common/autotest_common.sh@862 -- # return 1 00:11:04.797 00:53:38 -- common/autotest_common.sh@653 -- # es=1 00:11:04.797 00:53:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:04.797 00:53:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:04.797 00:53:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:04.797 00:53:38 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:04.797 00:53:38 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:04.797 00:53:38 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:04.797 00:53:38 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:04.797 00:53:38 -- event/cpu_locks.sh@141 -- # killprocess 117339 00:11:04.797 00:53:38 -- common/autotest_common.sh@936 -- # '[' -z 117339 ']' 00:11:04.797 00:53:38 -- common/autotest_common.sh@940 -- # kill -0 117339 00:11:04.797 00:53:38 -- common/autotest_common.sh@941 -- # uname 00:11:04.797 00:53:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:04.797 00:53:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117339 00:11:04.797 00:53:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:04.797 00:53:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:04.797 00:53:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117339' 00:11:04.797 killing process with pid 117339 00:11:04.797 00:53:39 -- common/autotest_common.sh@955 -- # kill 117339 00:11:04.797 00:53:39 -- common/autotest_common.sh@960 -- # wait 117339 00:11:05.366 00:11:05.366 real 0m2.471s 00:11:05.366 user 0m6.233s 00:11:05.366 sys 0m0.760s 00:11:05.366 00:53:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:05.366 00:53:39 -- common/autotest_common.sh@10 -- # set +x 00:11:05.366 ************************************ 00:11:05.366 END TEST locking_overlapped_coremask 00:11:05.366 ************************************ 00:11:05.624 00:53:39 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 
00:11:05.624 00:53:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:05.624 00:53:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:05.624 00:53:39 -- common/autotest_common.sh@10 -- # set +x 00:11:05.624 ************************************ 00:11:05.624 START TEST locking_overlapped_coremask_via_rpc 00:11:05.624 ************************************ 00:11:05.624 00:53:39 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:11:05.624 00:53:39 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=117419 00:11:05.624 00:53:39 -- event/cpu_locks.sh@149 -- # waitforlisten 117419 /var/tmp/spdk.sock 00:11:05.624 00:53:39 -- common/autotest_common.sh@829 -- # '[' -z 117419 ']' 00:11:05.624 00:53:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.624 00:53:39 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:05.624 00:53:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:05.625 00:53:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.625 00:53:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:05.625 00:53:39 -- common/autotest_common.sh@10 -- # set +x 00:11:05.625 [2024-11-18 00:53:39.900191] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:05.625 [2024-11-18 00:53:39.900492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117419 ] 00:11:05.883 [2024-11-18 00:53:40.067198] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:05.883 [2024-11-18 00:53:40.067713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:05.883 [2024-11-18 00:53:40.160737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:05.883 [2024-11-18 00:53:40.161415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.883 [2024-11-18 00:53:40.161639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.883 [2024-11-18 00:53:40.161571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.451 00:53:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.451 00:53:40 -- common/autotest_common.sh@862 -- # return 0 00:11:06.451 00:53:40 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=117442 00:11:06.451 00:53:40 -- event/cpu_locks.sh@153 -- # waitforlisten 117442 /var/tmp/spdk2.sock 00:11:06.451 00:53:40 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:06.451 00:53:40 -- common/autotest_common.sh@829 -- # '[' -z 117442 ']' 00:11:06.451 00:53:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:06.451 00:53:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.451 00:53:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:06.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:06.451 00:53:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.451 00:53:40 -- common/autotest_common.sh@10 -- # set +x 00:11:06.710 [2024-11-18 00:53:40.896621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:06.710 [2024-11-18 00:53:40.896895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117442 ] 00:11:06.710 [2024-11-18 00:53:41.069611] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:06.710 [2024-11-18 00:53:41.069700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:06.969 [2024-11-18 00:53:41.264013] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:06.969 [2024-11-18 00:53:41.264802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.969 [2024-11-18 00:53:41.264914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.969 [2024-11-18 00:53:41.264928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:08.343 00:53:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.343 00:53:42 -- common/autotest_common.sh@862 -- # return 0 00:11:08.343 00:53:42 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:08.343 00:53:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.343 00:53:42 -- common/autotest_common.sh@10 -- # set +x 00:11:08.343 00:53:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.343 00:53:42 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:08.343 00:53:42 -- common/autotest_common.sh@650 -- # local es=0 00:11:08.343 00:53:42 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:08.343 00:53:42 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:08.343 00:53:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:08.343 00:53:42 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:08.343 00:53:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:08.343 00:53:42 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:08.343 00:53:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.343 00:53:42 -- common/autotest_common.sh@10 -- # set +x 00:11:08.343 [2024-11-18 00:53:42.614524] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117419 has claimed it. 
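The ERROR above is the expected result: the first target (pid 117419) was started on mask 0x7 and claimed cores 0-2, so when the second target (started with --disable-cpumask-locks on mask 0x1c) is asked to re-enable its lock files it cannot take core 2, the one core the two masks share. A sketch of the failing call, assuming rpc_cmd resolves to scripts/rpc.py as it does elsewhere in this run; the -32603 response it produces is the JSON dump that follows:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # => "Failed to claim CPU core: 2" -- core 2 is the overlap between masks 0x7 and 0x1c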
00:11:08.343 request: 00:11:08.343 { 00:11:08.343 "method": "framework_enable_cpumask_locks", 00:11:08.343 "req_id": 1 00:11:08.343 } 00:11:08.343 Got JSON-RPC error response 00:11:08.343 response: 00:11:08.343 { 00:11:08.343 "code": -32603, 00:11:08.343 "message": "Failed to claim CPU core: 2" 00:11:08.343 } 00:11:08.343 00:53:42 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:08.343 00:53:42 -- common/autotest_common.sh@653 -- # es=1 00:11:08.343 00:53:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:08.343 00:53:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:08.343 00:53:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:08.343 00:53:42 -- event/cpu_locks.sh@158 -- # waitforlisten 117419 /var/tmp/spdk.sock 00:11:08.343 00:53:42 -- common/autotest_common.sh@829 -- # '[' -z 117419 ']' 00:11:08.343 00:53:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.343 00:53:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.343 00:53:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.343 00:53:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.343 00:53:42 -- common/autotest_common.sh@10 -- # set +x 00:11:08.602 00:53:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.602 00:53:42 -- common/autotest_common.sh@862 -- # return 0 00:11:08.602 00:53:42 -- event/cpu_locks.sh@159 -- # waitforlisten 117442 /var/tmp/spdk2.sock 00:11:08.602 00:53:42 -- common/autotest_common.sh@829 -- # '[' -z 117442 ']' 00:11:08.602 00:53:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:08.602 00:53:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.602 00:53:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:08.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:08.602 00:53:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.602 00:53:42 -- common/autotest_common.sh@10 -- # set +x 00:11:08.860 00:53:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.860 00:53:43 -- common/autotest_common.sh@862 -- # return 0 00:11:08.860 00:53:43 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:08.860 00:53:43 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:08.860 00:53:43 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:08.860 00:53:43 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:08.860 00:11:08.860 real 0m3.360s 00:11:08.860 user 0m1.506s 00:11:08.860 sys 0m0.306s 00:11:08.860 00:53:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:08.860 00:53:43 -- common/autotest_common.sh@10 -- # set +x 00:11:08.860 ************************************ 00:11:08.860 END TEST locking_overlapped_coremask_via_rpc 00:11:08.860 ************************************ 00:11:08.860 00:53:43 -- event/cpu_locks.sh@174 -- # cleanup 00:11:08.860 00:53:43 -- event/cpu_locks.sh@15 -- # [[ -z 117419 ]] 00:11:08.860 00:53:43 -- event/cpu_locks.sh@15 -- # killprocess 117419 00:11:08.860 00:53:43 -- common/autotest_common.sh@936 -- # '[' -z 117419 ']' 00:11:08.860 00:53:43 -- common/autotest_common.sh@940 -- # kill -0 117419 00:11:08.860 00:53:43 -- common/autotest_common.sh@941 -- # uname 00:11:08.860 00:53:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:08.860 00:53:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117419 00:11:08.860 00:53:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:08.860 00:53:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:08.860 00:53:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117419' 00:11:08.860 killing process with pid 117419 00:11:08.860 00:53:43 -- common/autotest_common.sh@955 -- # kill 117419 00:11:08.860 00:53:43 -- common/autotest_common.sh@960 -- # wait 117419 00:11:09.796 00:53:43 -- event/cpu_locks.sh@16 -- # [[ -z 117442 ]] 00:11:09.796 00:53:43 -- event/cpu_locks.sh@16 -- # killprocess 117442 00:11:09.796 00:53:43 -- common/autotest_common.sh@936 -- # '[' -z 117442 ']' 00:11:09.796 00:53:43 -- common/autotest_common.sh@940 -- # kill -0 117442 00:11:09.796 00:53:43 -- common/autotest_common.sh@941 -- # uname 00:11:09.796 00:53:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:09.796 00:53:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117442 00:11:09.796 00:53:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:09.796 00:53:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:09.796 00:53:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117442' 00:11:09.796 killing process with pid 117442 00:11:09.796 00:53:44 -- common/autotest_common.sh@955 -- # kill 117442 00:11:09.796 00:53:44 -- common/autotest_common.sh@960 -- # wait 117442 00:11:10.363 00:53:44 -- event/cpu_locks.sh@18 -- # rm -f 00:11:10.363 00:53:44 -- event/cpu_locks.sh@1 -- # cleanup 00:11:10.363 00:53:44 -- event/cpu_locks.sh@15 -- # [[ -z 117419 ]] 00:11:10.363 00:53:44 -- event/cpu_locks.sh@15 -- # killprocess 117419 00:11:10.363 
00:53:44 -- common/autotest_common.sh@936 -- # '[' -z 117419 ']' 00:11:10.363 00:53:44 -- common/autotest_common.sh@940 -- # kill -0 117419 00:11:10.363 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (117419) - No such process 00:11:10.363 Process with pid 117419 is not found 00:11:10.363 00:53:44 -- common/autotest_common.sh@963 -- # echo 'Process with pid 117419 is not found' 00:11:10.363 00:53:44 -- event/cpu_locks.sh@16 -- # [[ -z 117442 ]] 00:11:10.363 00:53:44 -- event/cpu_locks.sh@16 -- # killprocess 117442 00:11:10.363 00:53:44 -- common/autotest_common.sh@936 -- # '[' -z 117442 ']' 00:11:10.363 00:53:44 -- common/autotest_common.sh@940 -- # kill -0 117442 00:11:10.363 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (117442) - No such process 00:11:10.363 Process with pid 117442 is not found 00:11:10.363 00:53:44 -- common/autotest_common.sh@963 -- # echo 'Process with pid 117442 is not found' 00:11:10.363 00:53:44 -- event/cpu_locks.sh@18 -- # rm -f 00:11:10.363 00:11:10.363 real 0m25.005s 00:11:10.363 user 0m43.241s 00:11:10.363 sys 0m7.910s 00:11:10.363 00:53:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:10.363 00:53:44 -- common/autotest_common.sh@10 -- # set +x 00:11:10.363 ************************************ 00:11:10.363 END TEST cpu_locks 00:11:10.363 ************************************ 00:11:10.623 00:11:10.623 real 0m53.843s 00:11:10.623 user 1m39.783s 00:11:10.623 sys 0m13.030s 00:11:10.623 00:53:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:10.623 00:53:44 -- common/autotest_common.sh@10 -- # set +x 00:11:10.623 ************************************ 00:11:10.623 END TEST event 00:11:10.623 ************************************ 00:11:10.623 00:53:44 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:10.623 00:53:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:10.623 00:53:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.623 00:53:44 -- common/autotest_common.sh@10 -- # set +x 00:11:10.623 ************************************ 00:11:10.623 START TEST thread 00:11:10.623 ************************************ 00:11:10.623 00:53:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:10.623 * Looking for test storage... 
00:11:10.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:10.623 00:53:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:10.623 00:53:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:10.623 00:53:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:10.882 00:53:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:10.882 00:53:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:10.882 00:53:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:10.882 00:53:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:10.882 00:53:45 -- scripts/common.sh@335 -- # IFS=.-: 00:11:10.882 00:53:45 -- scripts/common.sh@335 -- # read -ra ver1 00:11:10.882 00:53:45 -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.882 00:53:45 -- scripts/common.sh@336 -- # read -ra ver2 00:11:10.882 00:53:45 -- scripts/common.sh@337 -- # local 'op=<' 00:11:10.882 00:53:45 -- scripts/common.sh@339 -- # ver1_l=2 00:11:10.882 00:53:45 -- scripts/common.sh@340 -- # ver2_l=1 00:11:10.882 00:53:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:10.882 00:53:45 -- scripts/common.sh@343 -- # case "$op" in 00:11:10.882 00:53:45 -- scripts/common.sh@344 -- # : 1 00:11:10.882 00:53:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:10.882 00:53:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.882 00:53:45 -- scripts/common.sh@364 -- # decimal 1 00:11:10.882 00:53:45 -- scripts/common.sh@352 -- # local d=1 00:11:10.882 00:53:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.882 00:53:45 -- scripts/common.sh@354 -- # echo 1 00:11:10.882 00:53:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:10.882 00:53:45 -- scripts/common.sh@365 -- # decimal 2 00:11:10.882 00:53:45 -- scripts/common.sh@352 -- # local d=2 00:11:10.882 00:53:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.882 00:53:45 -- scripts/common.sh@354 -- # echo 2 00:11:10.882 00:53:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:10.882 00:53:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:10.882 00:53:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:10.882 00:53:45 -- scripts/common.sh@367 -- # return 0 00:11:10.882 00:53:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.882 00:53:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:10.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.882 --rc genhtml_branch_coverage=1 00:11:10.882 --rc genhtml_function_coverage=1 00:11:10.882 --rc genhtml_legend=1 00:11:10.882 --rc geninfo_all_blocks=1 00:11:10.882 --rc geninfo_unexecuted_blocks=1 00:11:10.882 00:11:10.882 ' 00:11:10.882 00:53:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:10.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.882 --rc genhtml_branch_coverage=1 00:11:10.882 --rc genhtml_function_coverage=1 00:11:10.882 --rc genhtml_legend=1 00:11:10.882 --rc geninfo_all_blocks=1 00:11:10.882 --rc geninfo_unexecuted_blocks=1 00:11:10.882 00:11:10.882 ' 00:11:10.882 00:53:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:10.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.882 --rc genhtml_branch_coverage=1 00:11:10.882 --rc genhtml_function_coverage=1 00:11:10.882 --rc genhtml_legend=1 00:11:10.882 --rc geninfo_all_blocks=1 00:11:10.882 --rc geninfo_unexecuted_blocks=1 00:11:10.882 00:11:10.882 ' 00:11:10.882 00:53:45 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:10.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.882 --rc genhtml_branch_coverage=1 00:11:10.882 --rc genhtml_function_coverage=1 00:11:10.882 --rc genhtml_legend=1 00:11:10.882 --rc geninfo_all_blocks=1 00:11:10.882 --rc geninfo_unexecuted_blocks=1 00:11:10.882 00:11:10.882 ' 00:11:10.882 00:53:45 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:10.882 00:53:45 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:11:10.882 00:53:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.882 00:53:45 -- common/autotest_common.sh@10 -- # set +x 00:11:10.882 ************************************ 00:11:10.882 START TEST thread_poller_perf 00:11:10.882 ************************************ 00:11:10.882 00:53:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:10.882 [2024-11-18 00:53:45.132123] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:10.882 [2024-11-18 00:53:45.132528] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117606 ] 00:11:11.140 [2024-11-18 00:53:45.294072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.140 [2024-11-18 00:53:45.392869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.140 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:12.513 [2024-11-18T00:53:46.912Z] ====================================== 00:11:12.513 [2024-11-18T00:53:46.912Z] busy:2115127928 (cyc) 00:11:12.513 [2024-11-18T00:53:46.912Z] total_run_count: 317000 00:11:12.513 [2024-11-18T00:53:46.912Z] tsc_hz: 2100000000 (cyc) 00:11:12.513 [2024-11-18T00:53:46.912Z] ====================================== 00:11:12.513 [2024-11-18T00:53:46.912Z] poller_cost: 6672 (cyc), 3177 (nsec) 00:11:12.513 00:11:12.513 real 0m1.509s 00:11:12.513 user 0m1.278s 00:11:12.513 sys 0m0.130s 00:11:12.513 00:53:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:12.513 00:53:46 -- common/autotest_common.sh@10 -- # set +x 00:11:12.513 ************************************ 00:11:12.513 END TEST thread_poller_perf 00:11:12.513 ************************************ 00:11:12.513 00:53:46 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:12.513 00:53:46 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:11:12.513 00:53:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:12.513 00:53:46 -- common/autotest_common.sh@10 -- # set +x 00:11:12.513 ************************************ 00:11:12.513 START TEST thread_poller_perf 00:11:12.513 ************************************ 00:11:12.513 00:53:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:12.513 [2024-11-18 00:53:46.699493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:12.513 [2024-11-18 00:53:46.699804] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117650 ] 00:11:12.513 [2024-11-18 00:53:46.858771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.771 [2024-11-18 00:53:46.950981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.771 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:14.144 [2024-11-18T00:53:48.543Z] ====================================== 00:11:14.144 [2024-11-18T00:53:48.543Z] busy:2105479774 (cyc) 00:11:14.144 [2024-11-18T00:53:48.543Z] total_run_count: 4713000 00:11:14.144 [2024-11-18T00:53:48.543Z] tsc_hz: 2100000000 (cyc) 00:11:14.144 [2024-11-18T00:53:48.543Z] ====================================== 00:11:14.144 [2024-11-18T00:53:48.543Z] poller_cost: 446 (cyc), 212 (nsec) 00:11:14.144 00:11:14.144 real 0m1.485s 00:11:14.144 user 0m1.262s 00:11:14.144 sys 0m0.122s 00:11:14.144 00:53:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:14.144 00:53:48 -- common/autotest_common.sh@10 -- # set +x 00:11:14.144 ************************************ 00:11:14.144 END TEST thread_poller_perf 00:11:14.144 ************************************ 00:11:14.144 00:53:48 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:11:14.144 00:53:48 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:14.144 00:53:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:14.144 00:53:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:14.144 00:53:48 -- common/autotest_common.sh@10 -- # set +x 00:11:14.144 ************************************ 00:11:14.144 START TEST thread_spdk_lock 00:11:14.144 ************************************ 00:11:14.144 00:53:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:14.144 [2024-11-18 00:53:48.252703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:14.144 [2024-11-18 00:53:48.252930] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117686 ] 00:11:14.144 [2024-11-18 00:53:48.405496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:14.144 [2024-11-18 00:53:48.499276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.144 [2024-11-18 00:53:48.499280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.710 [2024-11-18 00:53:49.031199] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 957:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:14.710 [2024-11-18 00:53:49.031578] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:11:14.710 [2024-11-18 00:53:49.031680] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x5651b433d980 00:11:14.710 [2024-11-18 00:53:49.033537] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:14.710 [2024-11-18 00:53:49.033746] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1018:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:14.710 [2024-11-18 00:53:49.033897] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:14.968 Starting test contend 00:11:14.968 Worker Delay Wait us Hold us Total us 00:11:14.968 0 3 119821 199920 319742 00:11:14.968 1 5 63653 297577 361231 00:11:14.968 PASS test contend 00:11:14.968 Starting test hold_by_poller 00:11:14.968 PASS test hold_by_poller 00:11:14.968 Starting test hold_by_message 00:11:14.968 PASS test hold_by_message 00:11:14.968 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:11:14.968 100014 assertions passed 00:11:14.968 0 assertions failed 00:11:14.968 00:11:14.968 real 0m1.009s 00:11:14.968 user 0m1.315s 00:11:14.968 sys 0m0.128s 00:11:14.968 00:53:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:14.968 00:53:49 -- common/autotest_common.sh@10 -- # set +x 00:11:14.968 ************************************ 00:11:14.968 END TEST thread_spdk_lock 00:11:14.968 ************************************ 00:11:14.968 00:11:14.968 real 0m4.409s 00:11:14.968 user 0m4.069s 00:11:14.968 sys 0m0.589s 00:11:14.968 00:53:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:14.968 ************************************ 00:11:14.968 END TEST thread 00:11:14.968 00:53:49 -- common/autotest_common.sh@10 -- # set +x 00:11:14.968 ************************************ 00:11:14.968 00:53:49 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:14.968 00:53:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:14.968 00:53:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:14.968 00:53:49 -- common/autotest_common.sh@10 -- # set +x 00:11:14.968 ************************************ 00:11:14.968 START TEST accel 00:11:14.968 
************************************ 00:11:14.968 00:53:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:15.227 * Looking for test storage... 00:11:15.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:15.227 00:53:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:15.227 00:53:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:15.227 00:53:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:15.227 00:53:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:15.227 00:53:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:15.227 00:53:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:15.227 00:53:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:15.227 00:53:49 -- scripts/common.sh@335 -- # IFS=.-: 00:11:15.227 00:53:49 -- scripts/common.sh@335 -- # read -ra ver1 00:11:15.227 00:53:49 -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.227 00:53:49 -- scripts/common.sh@336 -- # read -ra ver2 00:11:15.227 00:53:49 -- scripts/common.sh@337 -- # local 'op=<' 00:11:15.227 00:53:49 -- scripts/common.sh@339 -- # ver1_l=2 00:11:15.227 00:53:49 -- scripts/common.sh@340 -- # ver2_l=1 00:11:15.227 00:53:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:15.227 00:53:49 -- scripts/common.sh@343 -- # case "$op" in 00:11:15.227 00:53:49 -- scripts/common.sh@344 -- # : 1 00:11:15.227 00:53:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:15.227 00:53:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:15.227 00:53:49 -- scripts/common.sh@364 -- # decimal 1 00:11:15.227 00:53:49 -- scripts/common.sh@352 -- # local d=1 00:11:15.227 00:53:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.227 00:53:49 -- scripts/common.sh@354 -- # echo 1 00:11:15.227 00:53:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:15.227 00:53:49 -- scripts/common.sh@365 -- # decimal 2 00:11:15.227 00:53:49 -- scripts/common.sh@352 -- # local d=2 00:11:15.227 00:53:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.227 00:53:49 -- scripts/common.sh@354 -- # echo 2 00:11:15.227 00:53:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:15.227 00:53:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:15.227 00:53:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:15.227 00:53:49 -- scripts/common.sh@367 -- # return 0 00:11:15.227 00:53:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.227 00:53:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:15.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.227 --rc genhtml_branch_coverage=1 00:11:15.227 --rc genhtml_function_coverage=1 00:11:15.227 --rc genhtml_legend=1 00:11:15.227 --rc geninfo_all_blocks=1 00:11:15.227 --rc geninfo_unexecuted_blocks=1 00:11:15.227 00:11:15.227 ' 00:11:15.227 00:53:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:15.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.227 --rc genhtml_branch_coverage=1 00:11:15.227 --rc genhtml_function_coverage=1 00:11:15.227 --rc genhtml_legend=1 00:11:15.227 --rc geninfo_all_blocks=1 00:11:15.227 --rc geninfo_unexecuted_blocks=1 00:11:15.227 00:11:15.227 ' 00:11:15.227 00:53:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:15.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.227 --rc genhtml_branch_coverage=1 00:11:15.227 --rc 
genhtml_function_coverage=1 00:11:15.227 --rc genhtml_legend=1 00:11:15.227 --rc geninfo_all_blocks=1 00:11:15.227 --rc geninfo_unexecuted_blocks=1 00:11:15.227 00:11:15.227 ' 00:11:15.227 00:53:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:15.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.227 --rc genhtml_branch_coverage=1 00:11:15.227 --rc genhtml_function_coverage=1 00:11:15.227 --rc genhtml_legend=1 00:11:15.227 --rc geninfo_all_blocks=1 00:11:15.227 --rc geninfo_unexecuted_blocks=1 00:11:15.227 00:11:15.227 ' 00:11:15.227 00:53:49 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:11:15.227 00:53:49 -- accel/accel.sh@74 -- # get_expected_opcs 00:11:15.227 00:53:49 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:15.227 00:53:49 -- accel/accel.sh@59 -- # spdk_tgt_pid=117779 00:11:15.227 00:53:49 -- accel/accel.sh@60 -- # waitforlisten 117779 00:11:15.227 00:53:49 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:11:15.227 00:53:49 -- common/autotest_common.sh@829 -- # '[' -z 117779 ']' 00:11:15.227 00:53:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.227 00:53:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.227 00:53:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.227 00:53:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.227 00:53:49 -- common/autotest_common.sh@10 -- # set +x 00:11:15.227 00:53:49 -- accel/accel.sh@58 -- # build_accel_config 00:11:15.227 00:53:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:15.227 00:53:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:15.228 00:53:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:15.228 00:53:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:15.228 00:53:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:15.228 00:53:49 -- accel/accel.sh@41 -- # local IFS=, 00:11:15.228 00:53:49 -- accel/accel.sh@42 -- # jq -r . 00:11:15.486 [2024-11-18 00:53:49.677622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:15.486 [2024-11-18 00:53:49.677956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117779 ] 00:11:15.486 [2024-11-18 00:53:49.834137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.744 [2024-11-18 00:53:49.925546] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:15.744 [2024-11-18 00:53:49.926016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.311 00:53:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:16.311 00:53:50 -- common/autotest_common.sh@862 -- # return 0 00:11:16.311 00:53:50 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:11:16.311 00:53:50 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:11:16.311 00:53:50 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:11:16.311 00:53:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.311 00:53:50 -- common/autotest_common.sh@10 -- # set +x 00:11:16.311 00:53:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.311 00:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # IFS== 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:11:16.311 00:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:16.311 00:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # IFS== 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:11:16.311 00:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:16.311 00:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # IFS== 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:11:16.311 00:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:16.311 00:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # IFS== 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:11:16.311 00:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:16.311 00:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # IFS== 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:11:16.311 00:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:16.311 00:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # IFS== 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:11:16.311 00:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:16.311 00:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # IFS== 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:11:16.311 00:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:16.311 00:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # IFS== 00:11:16.311 00:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:11:16.312 00:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:16.312 00:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:16.312 00:53:50 -- accel/accel.sh@64 -- # IFS== 00:11:16.312 00:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:11:16.312 00:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:16.312 00:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:16.312 00:53:50 -- accel/accel.sh@64 -- # IFS== 00:11:16.312 00:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:11:16.312 00:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:16.312 00:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:16.312 00:53:50 -- accel/accel.sh@64 -- # IFS== 00:11:16.312 00:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:11:16.312 00:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:16.312 00:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:16.312 00:53:50 -- accel/accel.sh@64 -- # IFS== 00:11:16.312 00:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:11:16.312 
00:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:16.312 00:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:16.312 00:53:50 -- accel/accel.sh@64 -- # IFS== 00:11:16.312 00:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:11:16.312 00:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:16.312 00:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:16.312 00:53:50 -- accel/accel.sh@64 -- # IFS== 00:11:16.312 00:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:11:16.312 00:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:16.312 00:53:50 -- accel/accel.sh@67 -- # killprocess 117779 00:11:16.312 00:53:50 -- common/autotest_common.sh@936 -- # '[' -z 117779 ']' 00:11:16.312 00:53:50 -- common/autotest_common.sh@940 -- # kill -0 117779 00:11:16.312 00:53:50 -- common/autotest_common.sh@941 -- # uname 00:11:16.312 00:53:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:16.312 00:53:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117779 00:11:16.312 00:53:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:16.312 00:53:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:16.312 00:53:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117779' 00:11:16.312 killing process with pid 117779 00:11:16.312 00:53:50 -- common/autotest_common.sh@955 -- # kill 117779 00:11:16.312 00:53:50 -- common/autotest_common.sh@960 -- # wait 117779 00:11:16.886 00:53:51 -- accel/accel.sh@68 -- # trap - ERR 00:11:16.886 00:53:51 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:11:16.886 00:53:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:16.886 00:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.886 00:53:51 -- common/autotest_common.sh@10 -- # set +x 00:11:16.886 00:53:51 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:11:16.886 00:53:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:11:16.886 00:53:51 -- accel/accel.sh@12 -- # build_accel_config 00:11:16.886 00:53:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:16.886 00:53:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:16.886 00:53:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:16.886 00:53:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:16.886 00:53:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:16.886 00:53:51 -- accel/accel.sh@41 -- # local IFS=, 00:11:16.886 00:53:51 -- accel/accel.sh@42 -- # jq -r . 
00:11:17.183 00:53:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:17.183 00:53:51 -- common/autotest_common.sh@10 -- # set +x 00:11:17.183 00:53:51 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:11:17.183 00:53:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:17.183 00:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:17.183 00:53:51 -- common/autotest_common.sh@10 -- # set +x 00:11:17.183 ************************************ 00:11:17.183 START TEST accel_missing_filename 00:11:17.183 ************************************ 00:11:17.183 00:53:51 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:11:17.183 00:53:51 -- common/autotest_common.sh@650 -- # local es=0 00:11:17.183 00:53:51 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:11:17.183 00:53:51 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:11:17.183 00:53:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:17.183 00:53:51 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:11:17.183 00:53:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:17.183 00:53:51 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:11:17.183 00:53:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:11:17.183 00:53:51 -- accel/accel.sh@12 -- # build_accel_config 00:11:17.183 00:53:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:17.183 00:53:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:17.183 00:53:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:17.183 00:53:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:17.183 00:53:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:17.183 00:53:51 -- accel/accel.sh@41 -- # local IFS=, 00:11:17.183 00:53:51 -- accel/accel.sh@42 -- # jq -r . 00:11:17.183 [2024-11-18 00:53:51.428825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:17.183 [2024-11-18 00:53:51.429033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117842 ] 00:11:17.455 [2024-11-18 00:53:51.578191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.455 [2024-11-18 00:53:51.667458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.455 [2024-11-18 00:53:51.750514] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:17.714 [2024-11-18 00:53:51.876293] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:11:17.714 A filename is required. 
00:11:17.714 00:53:52 -- common/autotest_common.sh@653 -- # es=234 00:11:17.714 00:53:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:17.714 00:53:52 -- common/autotest_common.sh@662 -- # es=106 00:11:17.714 00:53:52 -- common/autotest_common.sh@663 -- # case "$es" in 00:11:17.714 00:53:52 -- common/autotest_common.sh@670 -- # es=1 00:11:17.714 00:53:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:17.714 00:11:17.714 real 0m0.678s 00:11:17.714 user 0m0.394s 00:11:17.714 sys 0m0.231s 00:11:17.714 00:53:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:17.714 00:53:52 -- common/autotest_common.sh@10 -- # set +x 00:11:17.714 ************************************ 00:11:17.714 END TEST accel_missing_filename 00:11:17.714 ************************************ 00:11:17.973 00:53:52 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:17.973 00:53:52 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:11:17.973 00:53:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:17.973 00:53:52 -- common/autotest_common.sh@10 -- # set +x 00:11:17.973 ************************************ 00:11:17.973 START TEST accel_compress_verify 00:11:17.973 ************************************ 00:11:17.973 00:53:52 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:17.973 00:53:52 -- common/autotest_common.sh@650 -- # local es=0 00:11:17.973 00:53:52 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:17.973 00:53:52 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:11:17.973 00:53:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:17.973 00:53:52 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:11:17.973 00:53:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:17.973 00:53:52 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:17.973 00:53:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:17.973 00:53:52 -- accel/accel.sh@12 -- # build_accel_config 00:11:17.973 00:53:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:17.974 00:53:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:17.974 00:53:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:17.974 00:53:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:17.974 00:53:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:17.974 00:53:52 -- accel/accel.sh@41 -- # local IFS=, 00:11:17.974 00:53:52 -- accel/accel.sh@42 -- # jq -r . 00:11:17.974 [2024-11-18 00:53:52.172773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:17.974 [2024-11-18 00:53:52.173109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117881 ] 00:11:17.974 [2024-11-18 00:53:52.316829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.233 [2024-11-18 00:53:52.403212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.233 [2024-11-18 00:53:52.484698] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:18.233 [2024-11-18 00:53:52.609919] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:11:18.491 00:11:18.491 Compression does not support the verify option, aborting. 00:11:18.491 00:53:52 -- common/autotest_common.sh@653 -- # es=161 00:11:18.491 00:53:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:18.491 00:53:52 -- common/autotest_common.sh@662 -- # es=33 00:11:18.491 00:53:52 -- common/autotest_common.sh@663 -- # case "$es" in 00:11:18.491 00:53:52 -- common/autotest_common.sh@670 -- # es=1 00:11:18.491 00:53:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:18.491 00:11:18.492 real 0m0.661s 00:11:18.492 user 0m0.416s 00:11:18.492 sys 0m0.196s 00:11:18.492 00:53:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:18.492 00:53:52 -- common/autotest_common.sh@10 -- # set +x 00:11:18.492 ************************************ 00:11:18.492 END TEST accel_compress_verify 00:11:18.492 ************************************ 00:11:18.492 00:53:52 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:11:18.492 00:53:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:18.492 00:53:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.492 00:53:52 -- common/autotest_common.sh@10 -- # set +x 00:11:18.492 ************************************ 00:11:18.492 START TEST accel_wrong_workload 00:11:18.492 ************************************ 00:11:18.492 00:53:52 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:11:18.492 00:53:52 -- common/autotest_common.sh@650 -- # local es=0 00:11:18.492 00:53:52 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:11:18.492 00:53:52 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:11:18.492 00:53:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:18.492 00:53:52 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:11:18.492 00:53:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:18.492 00:53:52 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:11:18.492 00:53:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:11:18.492 00:53:52 -- accel/accel.sh@12 -- # build_accel_config 00:11:18.492 00:53:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:18.492 00:53:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:18.492 00:53:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:18.492 00:53:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:18.492 00:53:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:18.492 00:53:52 -- accel/accel.sh@41 -- # local IFS=, 00:11:18.492 00:53:52 -- accel/accel.sh@42 -- # jq -r . 
00:11:18.751 Unsupported workload type: foobar 00:11:18.751 [2024-11-18 00:53:52.899592] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:11:18.751 accel_perf options: 00:11:18.751 [-h help message] 00:11:18.751 [-q queue depth per core] 00:11:18.751 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:18.751 [-T number of threads per core 00:11:18.751 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:18.751 [-t time in seconds] 00:11:18.751 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:18.751 [ dif_verify, , dif_generate, dif_generate_copy 00:11:18.751 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:18.751 [-l for compress/decompress workloads, name of uncompressed input file 00:11:18.751 [-S for crc32c workload, use this seed value (default 0) 00:11:18.751 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:18.751 [-f for fill workload, use this BYTE value (default 255) 00:11:18.751 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:18.751 [-y verify result if this switch is on] 00:11:18.751 [-a tasks to allocate per core (default: same value as -q)] 00:11:18.751 Can be used to spread operations across a wider range of memory. 00:11:18.751 00:53:52 -- common/autotest_common.sh@653 -- # es=1 00:11:18.751 00:53:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:18.751 00:53:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:18.751 00:53:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:18.751 00:11:18.751 real 0m0.068s 00:11:18.751 user 0m0.075s 00:11:18.751 sys 0m0.040s 00:11:18.751 00:53:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:18.751 00:53:52 -- common/autotest_common.sh@10 -- # set +x 00:11:18.751 ************************************ 00:11:18.751 END TEST accel_wrong_workload 00:11:18.751 ************************************ 00:11:18.751 00:53:52 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:11:18.751 00:53:52 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:11:18.751 00:53:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.751 00:53:52 -- common/autotest_common.sh@10 -- # set +x 00:11:18.751 ************************************ 00:11:18.751 START TEST accel_negative_buffers 00:11:18.751 ************************************ 00:11:18.751 00:53:52 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:11:18.751 00:53:52 -- common/autotest_common.sh@650 -- # local es=0 00:11:18.751 00:53:52 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:11:18.751 00:53:52 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:11:18.751 00:53:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:18.751 00:53:52 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:11:18.751 00:53:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:18.752 00:53:52 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:11:18.752 00:53:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:11:18.752 00:53:52 -- accel/accel.sh@12 -- # 
build_accel_config 00:11:18.752 00:53:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:18.752 00:53:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:18.752 00:53:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:18.752 00:53:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:18.752 00:53:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:18.752 00:53:52 -- accel/accel.sh@41 -- # local IFS=, 00:11:18.752 00:53:52 -- accel/accel.sh@42 -- # jq -r . 00:11:18.752 -x option must be non-negative. 00:11:18.752 [2024-11-18 00:53:53.019469] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:11:18.752 accel_perf options: 00:11:18.752 [-h help message] 00:11:18.752 [-q queue depth per core] 00:11:18.752 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:18.752 [-T number of threads per core 00:11:18.752 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:18.752 [-t time in seconds] 00:11:18.752 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:18.752 [ dif_verify, , dif_generate, dif_generate_copy 00:11:18.752 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:18.752 [-l for compress/decompress workloads, name of uncompressed input file 00:11:18.752 [-S for crc32c workload, use this seed value (default 0) 00:11:18.752 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:18.752 [-f for fill workload, use this BYTE value (default 255) 00:11:18.752 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:18.752 [-y verify result if this switch is on] 00:11:18.752 [-a tasks to allocate per core (default: same value as -q)] 00:11:18.752 Can be used to spread operations across a wider range of memory. 
00:11:18.752 00:53:53 -- common/autotest_common.sh@653 -- # es=1 00:11:18.752 00:53:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:18.752 00:53:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:18.752 00:53:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:18.752 00:11:18.752 real 0m0.064s 00:11:18.752 user 0m0.074s 00:11:18.752 sys 0m0.034s 00:11:18.752 00:53:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:18.752 ************************************ 00:11:18.752 END TEST accel_negative_buffers 00:11:18.752 00:53:53 -- common/autotest_common.sh@10 -- # set +x 00:11:18.752 ************************************ 00:11:18.752 00:53:53 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:11:18.752 00:53:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:11:18.752 00:53:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.752 00:53:53 -- common/autotest_common.sh@10 -- # set +x 00:11:18.752 ************************************ 00:11:18.752 START TEST accel_crc32c 00:11:18.752 ************************************ 00:11:18.752 00:53:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:11:18.752 00:53:53 -- accel/accel.sh@16 -- # local accel_opc 00:11:18.752 00:53:53 -- accel/accel.sh@17 -- # local accel_module 00:11:18.752 00:53:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:11:18.752 00:53:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:11:18.752 00:53:53 -- accel/accel.sh@12 -- # build_accel_config 00:11:18.752 00:53:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:18.752 00:53:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:18.752 00:53:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:18.752 00:53:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:18.752 00:53:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:18.752 00:53:53 -- accel/accel.sh@41 -- # local IFS=, 00:11:18.752 00:53:53 -- accel/accel.sh@42 -- # jq -r . 00:11:18.752 [2024-11-18 00:53:53.149917] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:18.752 [2024-11-18 00:53:53.150196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117962 ] 00:11:19.011 [2024-11-18 00:53:53.305933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.011 [2024-11-18 00:53:53.393048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.916 00:53:54 -- accel/accel.sh@18 -- # out=' 00:11:20.916 SPDK Configuration: 00:11:20.916 Core mask: 0x1 00:11:20.916 00:11:20.916 Accel Perf Configuration: 00:11:20.916 Workload Type: crc32c 00:11:20.916 CRC-32C seed: 32 00:11:20.916 Transfer size: 4096 bytes 00:11:20.916 Vector count 1 00:11:20.916 Module: software 00:11:20.916 Queue depth: 32 00:11:20.916 Allocate depth: 32 00:11:20.916 # threads/core: 1 00:11:20.916 Run time: 1 seconds 00:11:20.916 Verify: Yes 00:11:20.916 00:11:20.916 Running for 1 seconds... 
00:11:20.916 00:11:20.916 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:20.916 ------------------------------------------------------------------------------------ 00:11:20.916 0,0 517184/s 2020 MiB/s 0 0 00:11:20.916 ==================================================================================== 00:11:20.916 Total 517184/s 2020 MiB/s 0 0' 00:11:20.916 00:53:54 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:54 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:11:20.916 00:53:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:11:20.916 00:53:54 -- accel/accel.sh@12 -- # build_accel_config 00:11:20.916 00:53:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:20.916 00:53:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:20.916 00:53:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:20.916 00:53:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:20.916 00:53:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:20.916 00:53:54 -- accel/accel.sh@41 -- # local IFS=, 00:11:20.916 00:53:54 -- accel/accel.sh@42 -- # jq -r . 00:11:20.916 [2024-11-18 00:53:54.834015] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:20.916 [2024-11-18 00:53:54.834326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117987 ] 00:11:20.916 [2024-11-18 00:53:54.987316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.916 [2024-11-18 00:53:55.077937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val= 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val= 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val=0x1 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val= 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val= 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val=crc32c 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val=32 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val= 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val=software 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@23 -- # accel_module=software 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val=32 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val=32 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val=1 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val=Yes 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val= 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:20.916 00:53:55 -- accel/accel.sh@21 -- # val= 00:11:20.916 00:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # IFS=: 00:11:20.916 00:53:55 -- accel/accel.sh@20 -- # read -r var val 00:11:22.294 00:53:56 -- accel/accel.sh@21 -- # val= 00:11:22.294 00:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.294 00:53:56 -- accel/accel.sh@20 -- # IFS=: 00:11:22.294 00:53:56 -- accel/accel.sh@20 -- # read -r var val 00:11:22.294 00:53:56 -- accel/accel.sh@21 -- # val= 00:11:22.294 00:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.294 00:53:56 -- accel/accel.sh@20 -- # IFS=: 00:11:22.294 00:53:56 -- accel/accel.sh@20 -- # read -r var val 00:11:22.294 00:53:56 -- accel/accel.sh@21 -- # val= 00:11:22.294 00:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.294 00:53:56 -- accel/accel.sh@20 -- # IFS=: 00:11:22.294 00:53:56 -- accel/accel.sh@20 -- # read -r var val 00:11:22.294 00:53:56 -- accel/accel.sh@21 -- # val= 00:11:22.294 00:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.294 00:53:56 -- accel/accel.sh@20 -- # IFS=: 00:11:22.294 00:53:56 -- accel/accel.sh@20 -- # read -r var val 00:11:22.294 00:53:56 -- accel/accel.sh@21 -- # val= 00:11:22.294 00:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.294 00:53:56 -- accel/accel.sh@20 -- # IFS=: 00:11:22.294 00:53:56 
-- accel/accel.sh@20 -- # read -r var val 00:11:22.294 00:53:56 -- accel/accel.sh@21 -- # val= 00:11:22.294 00:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.294 00:53:56 -- accel/accel.sh@20 -- # IFS=: 00:11:22.294 00:53:56 -- accel/accel.sh@20 -- # read -r var val 00:11:22.294 00:53:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:22.294 00:53:56 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:11:22.294 00:53:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:22.294 00:11:22.294 real 0m3.379s 00:11:22.294 user 0m2.754s 00:11:22.294 sys 0m0.447s 00:11:22.294 00:53:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:22.294 ************************************ 00:11:22.294 END TEST accel_crc32c 00:11:22.294 00:53:56 -- common/autotest_common.sh@10 -- # set +x 00:11:22.294 ************************************ 00:11:22.294 00:53:56 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:11:22.294 00:53:56 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:11:22.294 00:53:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.294 00:53:56 -- common/autotest_common.sh@10 -- # set +x 00:11:22.294 ************************************ 00:11:22.294 START TEST accel_crc32c_C2 00:11:22.294 ************************************ 00:11:22.294 00:53:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:11:22.294 00:53:56 -- accel/accel.sh@16 -- # local accel_opc 00:11:22.294 00:53:56 -- accel/accel.sh@17 -- # local accel_module 00:11:22.294 00:53:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:11:22.295 00:53:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:11:22.295 00:53:56 -- accel/accel.sh@12 -- # build_accel_config 00:11:22.295 00:53:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:22.295 00:53:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:22.295 00:53:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:22.295 00:53:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:22.295 00:53:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:22.295 00:53:56 -- accel/accel.sh@41 -- # local IFS=, 00:11:22.295 00:53:56 -- accel/accel.sh@42 -- # jq -r . 00:11:22.295 [2024-11-18 00:53:56.591474] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:22.295 [2024-11-18 00:53:56.591763] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118032 ] 00:11:22.554 [2024-11-18 00:53:56.749355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.554 [2024-11-18 00:53:56.826764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.932 00:53:58 -- accel/accel.sh@18 -- # out=' 00:11:23.932 SPDK Configuration: 00:11:23.933 Core mask: 0x1 00:11:23.933 00:11:23.933 Accel Perf Configuration: 00:11:23.933 Workload Type: crc32c 00:11:23.933 CRC-32C seed: 0 00:11:23.933 Transfer size: 4096 bytes 00:11:23.933 Vector count 2 00:11:23.933 Module: software 00:11:23.933 Queue depth: 32 00:11:23.933 Allocate depth: 32 00:11:23.933 # threads/core: 1 00:11:23.933 Run time: 1 seconds 00:11:23.933 Verify: Yes 00:11:23.933 00:11:23.933 Running for 1 seconds... 
00:11:23.933 00:11:23.933 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:23.933 ------------------------------------------------------------------------------------ 00:11:23.933 0,0 408768/s 3193 MiB/s 0 0 00:11:23.933 ==================================================================================== 00:11:23.933 Total 408768/s 1596 MiB/s 0 0' 00:11:23.933 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:23.933 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:23.933 00:53:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:11:23.933 00:53:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:11:23.933 00:53:58 -- accel/accel.sh@12 -- # build_accel_config 00:11:23.933 00:53:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:23.933 00:53:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:23.933 00:53:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:23.933 00:53:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:23.933 00:53:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:23.933 00:53:58 -- accel/accel.sh@41 -- # local IFS=, 00:11:23.933 00:53:58 -- accel/accel.sh@42 -- # jq -r . 00:11:23.933 [2024-11-18 00:53:58.250628] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:23.933 [2024-11-18 00:53:58.250865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118066 ] 00:11:24.192 [2024-11-18 00:53:58.394281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.192 [2024-11-18 00:53:58.506894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.451 00:53:58 -- accel/accel.sh@21 -- # val= 00:11:24.451 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.451 00:53:58 -- accel/accel.sh@21 -- # val= 00:11:24.451 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.451 00:53:58 -- accel/accel.sh@21 -- # val=0x1 00:11:24.451 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.451 00:53:58 -- accel/accel.sh@21 -- # val= 00:11:24.451 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.451 00:53:58 -- accel/accel.sh@21 -- # val= 00:11:24.451 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.451 00:53:58 -- accel/accel.sh@21 -- # val=crc32c 00:11:24.451 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.451 00:53:58 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.451 00:53:58 -- accel/accel.sh@21 -- # val=0 00:11:24.451 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.451 00:53:58 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:11:24.451 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.451 00:53:58 -- accel/accel.sh@21 -- # val= 00:11:24.451 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.451 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.451 00:53:58 -- accel/accel.sh@21 -- # val=software 00:11:24.451 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.451 00:53:58 -- accel/accel.sh@23 -- # accel_module=software 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.452 00:53:58 -- accel/accel.sh@21 -- # val=32 00:11:24.452 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.452 00:53:58 -- accel/accel.sh@21 -- # val=32 00:11:24.452 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.452 00:53:58 -- accel/accel.sh@21 -- # val=1 00:11:24.452 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.452 00:53:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:24.452 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.452 00:53:58 -- accel/accel.sh@21 -- # val=Yes 00:11:24.452 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.452 00:53:58 -- accel/accel.sh@21 -- # val= 00:11:24.452 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.452 00:53:58 -- accel/accel.sh@21 -- # val= 00:11:24.452 00:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # IFS=: 00:11:24.452 00:53:58 -- accel/accel.sh@20 -- # read -r var val 00:11:25.828 00:53:59 -- accel/accel.sh@21 -- # val= 00:11:25.828 00:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.828 00:53:59 -- accel/accel.sh@20 -- # IFS=: 00:11:25.828 00:53:59 -- accel/accel.sh@20 -- # read -r var val 00:11:25.828 00:53:59 -- accel/accel.sh@21 -- # val= 00:11:25.828 00:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.828 00:53:59 -- accel/accel.sh@20 -- # IFS=: 00:11:25.828 00:53:59 -- accel/accel.sh@20 -- # read -r var val 00:11:25.828 00:53:59 -- accel/accel.sh@21 -- # val= 00:11:25.828 00:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.828 00:53:59 -- accel/accel.sh@20 -- # IFS=: 00:11:25.828 00:53:59 -- accel/accel.sh@20 -- # read -r var val 00:11:25.828 00:53:59 -- accel/accel.sh@21 -- # val= 00:11:25.828 00:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.828 00:53:59 -- accel/accel.sh@20 -- # IFS=: 00:11:25.828 00:53:59 -- accel/accel.sh@20 -- # read -r var val 00:11:25.828 00:53:59 -- accel/accel.sh@21 -- # val= 00:11:25.828 00:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.828 00:53:59 -- accel/accel.sh@20 -- # IFS=: 00:11:25.828 00:53:59 -- 
accel/accel.sh@20 -- # read -r var val 00:11:25.828 00:53:59 -- accel/accel.sh@21 -- # val= 00:11:25.828 00:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.828 00:53:59 -- accel/accel.sh@20 -- # IFS=: 00:11:25.828 00:53:59 -- accel/accel.sh@20 -- # read -r var val 00:11:25.828 00:53:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:25.828 00:53:59 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:11:25.828 00:53:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:25.828 00:11:25.828 real 0m3.377s 00:11:25.828 user 0m2.766s 00:11:25.828 sys 0m0.437s 00:11:25.828 00:53:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:25.828 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:11:25.828 ************************************ 00:11:25.828 END TEST accel_crc32c_C2 00:11:25.828 ************************************ 00:11:25.828 00:53:59 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:11:25.828 00:53:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:25.828 00:53:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:25.828 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:11:25.828 ************************************ 00:11:25.828 START TEST accel_copy 00:11:25.828 ************************************ 00:11:25.828 00:53:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:11:25.828 00:53:59 -- accel/accel.sh@16 -- # local accel_opc 00:11:25.828 00:53:59 -- accel/accel.sh@17 -- # local accel_module 00:11:25.828 00:54:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:11:25.828 00:54:00 -- accel/accel.sh@12 -- # build_accel_config 00:11:25.828 00:54:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:11:25.828 00:54:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:25.828 00:54:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:25.828 00:54:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:25.828 00:54:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:25.828 00:54:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:25.828 00:54:00 -- accel/accel.sh@41 -- # local IFS=, 00:11:25.828 00:54:00 -- accel/accel.sh@42 -- # jq -r . 00:11:25.828 [2024-11-18 00:54:00.042555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:25.828 [2024-11-18 00:54:00.042846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118100 ] 00:11:25.828 [2024-11-18 00:54:00.201167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.088 [2024-11-18 00:54:00.282782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.465 00:54:01 -- accel/accel.sh@18 -- # out=' 00:11:27.465 SPDK Configuration: 00:11:27.465 Core mask: 0x1 00:11:27.465 00:11:27.465 Accel Perf Configuration: 00:11:27.465 Workload Type: copy 00:11:27.465 Transfer size: 4096 bytes 00:11:27.465 Vector count 1 00:11:27.465 Module: software 00:11:27.465 Queue depth: 32 00:11:27.465 Allocate depth: 32 00:11:27.465 # threads/core: 1 00:11:27.465 Run time: 1 seconds 00:11:27.465 Verify: Yes 00:11:27.465 00:11:27.465 Running for 1 seconds... 
00:11:27.465 00:11:27.465 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:27.465 ------------------------------------------------------------------------------------ 00:11:27.465 0,0 340448/s 1329 MiB/s 0 0 00:11:27.465 ==================================================================================== 00:11:27.465 Total 340448/s 1329 MiB/s 0 0' 00:11:27.465 00:54:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:11:27.465 00:54:01 -- accel/accel.sh@20 -- # IFS=: 00:11:27.465 00:54:01 -- accel/accel.sh@20 -- # read -r var val 00:11:27.465 00:54:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:11:27.465 00:54:01 -- accel/accel.sh@12 -- # build_accel_config 00:11:27.465 00:54:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:27.465 00:54:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:27.465 00:54:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:27.465 00:54:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:27.465 00:54:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:27.465 00:54:01 -- accel/accel.sh@41 -- # local IFS=, 00:11:27.465 00:54:01 -- accel/accel.sh@42 -- # jq -r . 00:11:27.465 [2024-11-18 00:54:01.719722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:27.465 [2024-11-18 00:54:01.720019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118135 ] 00:11:27.725 [2024-11-18 00:54:01.877224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.725 [2024-11-18 00:54:01.972950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val= 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val= 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val=0x1 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val= 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val= 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val=copy 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@24 -- # accel_opc=copy 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- 
accel/accel.sh@21 -- # val= 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val=software 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@23 -- # accel_module=software 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val=32 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val=32 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val=1 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val=Yes 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val= 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:27.725 00:54:02 -- accel/accel.sh@21 -- # val= 00:11:27.725 00:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # IFS=: 00:11:27.725 00:54:02 -- accel/accel.sh@20 -- # read -r var val 00:11:29.105 00:54:03 -- accel/accel.sh@21 -- # val= 00:11:29.105 00:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.105 00:54:03 -- accel/accel.sh@20 -- # IFS=: 00:11:29.105 00:54:03 -- accel/accel.sh@20 -- # read -r var val 00:11:29.105 00:54:03 -- accel/accel.sh@21 -- # val= 00:11:29.105 00:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.105 00:54:03 -- accel/accel.sh@20 -- # IFS=: 00:11:29.105 00:54:03 -- accel/accel.sh@20 -- # read -r var val 00:11:29.105 00:54:03 -- accel/accel.sh@21 -- # val= 00:11:29.105 00:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.105 00:54:03 -- accel/accel.sh@20 -- # IFS=: 00:11:29.105 00:54:03 -- accel/accel.sh@20 -- # read -r var val 00:11:29.105 00:54:03 -- accel/accel.sh@21 -- # val= 00:11:29.105 00:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.105 00:54:03 -- accel/accel.sh@20 -- # IFS=: 00:11:29.105 00:54:03 -- accel/accel.sh@20 -- # read -r var val 00:11:29.105 00:54:03 -- accel/accel.sh@21 -- # val= 00:11:29.105 00:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.105 00:54:03 -- accel/accel.sh@20 -- # IFS=: 00:11:29.105 00:54:03 -- accel/accel.sh@20 -- # read -r var val 00:11:29.105 00:54:03 -- accel/accel.sh@21 -- # val= 00:11:29.105 00:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.105 00:54:03 -- accel/accel.sh@20 -- # IFS=: 00:11:29.105 00:54:03 -- 
accel/accel.sh@20 -- # read -r var val 00:11:29.105 00:54:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:29.105 00:54:03 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:11:29.105 00:54:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:29.105 00:11:29.105 real 0m3.390s 00:11:29.105 user 0m2.799s 00:11:29.105 sys 0m0.426s 00:11:29.105 00:54:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:29.105 00:54:03 -- common/autotest_common.sh@10 -- # set +x 00:11:29.105 ************************************ 00:11:29.105 END TEST accel_copy 00:11:29.105 ************************************ 00:11:29.105 00:54:03 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:29.105 00:54:03 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:29.105 00:54:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:29.105 00:54:03 -- common/autotest_common.sh@10 -- # set +x 00:11:29.105 ************************************ 00:11:29.105 START TEST accel_fill 00:11:29.105 ************************************ 00:11:29.105 00:54:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:29.105 00:54:03 -- accel/accel.sh@16 -- # local accel_opc 00:11:29.105 00:54:03 -- accel/accel.sh@17 -- # local accel_module 00:11:29.106 00:54:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:29.106 00:54:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:29.106 00:54:03 -- accel/accel.sh@12 -- # build_accel_config 00:11:29.106 00:54:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:29.106 00:54:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:29.106 00:54:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:29.106 00:54:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:29.106 00:54:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:29.106 00:54:03 -- accel/accel.sh@41 -- # local IFS=, 00:11:29.106 00:54:03 -- accel/accel.sh@42 -- # jq -r . 00:11:29.106 [2024-11-18 00:54:03.490390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:29.106 [2024-11-18 00:54:03.490672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118175 ] 00:11:29.365 [2024-11-18 00:54:03.645593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.365 [2024-11-18 00:54:03.725264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.744 00:54:05 -- accel/accel.sh@18 -- # out=' 00:11:30.744 SPDK Configuration: 00:11:30.744 Core mask: 0x1 00:11:30.744 00:11:30.744 Accel Perf Configuration: 00:11:30.744 Workload Type: fill 00:11:30.744 Fill pattern: 0x80 00:11:30.744 Transfer size: 4096 bytes 00:11:30.744 Vector count 1 00:11:30.744 Module: software 00:11:30.744 Queue depth: 64 00:11:30.744 Allocate depth: 64 00:11:30.744 # threads/core: 1 00:11:30.744 Run time: 1 seconds 00:11:30.744 Verify: Yes 00:11:30.744 00:11:30.744 Running for 1 seconds... 
00:11:30.744 00:11:30.744 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:30.744 ------------------------------------------------------------------------------------ 00:11:30.744 0,0 554816/s 2167 MiB/s 0 0 00:11:30.744 ==================================================================================== 00:11:30.744 Total 554816/s 2167 MiB/s 0 0' 00:11:30.744 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:30.744 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:30.744 00:54:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:30.744 00:54:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:30.744 00:54:05 -- accel/accel.sh@12 -- # build_accel_config 00:11:30.744 00:54:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:30.744 00:54:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:30.744 00:54:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:30.744 00:54:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:30.744 00:54:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:30.744 00:54:05 -- accel/accel.sh@41 -- # local IFS=, 00:11:30.744 00:54:05 -- accel/accel.sh@42 -- # jq -r . 00:11:31.004 [2024-11-18 00:54:05.161478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:31.004 [2024-11-18 00:54:05.161740] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118205 ] 00:11:31.004 [2024-11-18 00:54:05.317038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.263 [2024-11-18 00:54:05.417394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.263 00:54:05 -- accel/accel.sh@21 -- # val= 00:11:31.263 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.263 00:54:05 -- accel/accel.sh@21 -- # val= 00:11:31.263 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.263 00:54:05 -- accel/accel.sh@21 -- # val=0x1 00:11:31.263 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.263 00:54:05 -- accel/accel.sh@21 -- # val= 00:11:31.263 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.263 00:54:05 -- accel/accel.sh@21 -- # val= 00:11:31.263 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.263 00:54:05 -- accel/accel.sh@21 -- # val=fill 00:11:31.263 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.263 00:54:05 -- accel/accel.sh@24 -- # accel_opc=fill 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.263 00:54:05 -- accel/accel.sh@21 -- # val=0x80 00:11:31.263 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # read -r var val 
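The repeating 'case "$var" in' / 'IFS=:' / 'read -r var val' lines surrounding this point are accel.sh replaying the accel_perf report one field at a time. The loop below is only a rough sketch reconstructed from the trace, not the verbatim script: 'report' is a stand-in for the captured dump, while accel_opc, accel_module and the field names are the ones visible in the trace and tables above.

  report=$'Workload Type: fill\nModule: software'    # stand-in for the full accel_perf dump
  while IFS=: read -r var val; do
      case "$var" in
          *"Workload Type"*) accel_opc=${val# } ;;    # -> "fill", "crc32c", "copy", ...
          *Module*)          accel_module=${val# } ;; # -> "software" in every run of this job
      esac
  done <<< "$report"

This is why configuration values such as 0x80, '4096 bytes', 32 and '1 seconds' reappear in the trace as individual val= assignments.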
00:11:31.263 00:54:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:31.263 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.263 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.264 00:54:05 -- accel/accel.sh@21 -- # val= 00:11:31.264 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.264 00:54:05 -- accel/accel.sh@21 -- # val=software 00:11:31.264 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.264 00:54:05 -- accel/accel.sh@23 -- # accel_module=software 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.264 00:54:05 -- accel/accel.sh@21 -- # val=64 00:11:31.264 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.264 00:54:05 -- accel/accel.sh@21 -- # val=64 00:11:31.264 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.264 00:54:05 -- accel/accel.sh@21 -- # val=1 00:11:31.264 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.264 00:54:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:31.264 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.264 00:54:05 -- accel/accel.sh@21 -- # val=Yes 00:11:31.264 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.264 00:54:05 -- accel/accel.sh@21 -- # val= 00:11:31.264 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:31.264 00:54:05 -- accel/accel.sh@21 -- # val= 00:11:31.264 00:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # IFS=: 00:11:31.264 00:54:05 -- accel/accel.sh@20 -- # read -r var val 00:11:32.643 00:54:06 -- accel/accel.sh@21 -- # val= 00:11:32.643 00:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.643 00:54:06 -- accel/accel.sh@20 -- # IFS=: 00:11:32.643 00:54:06 -- accel/accel.sh@20 -- # read -r var val 00:11:32.643 00:54:06 -- accel/accel.sh@21 -- # val= 00:11:32.643 00:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.643 00:54:06 -- accel/accel.sh@20 -- # IFS=: 00:11:32.643 00:54:06 -- accel/accel.sh@20 -- # read -r var val 00:11:32.643 00:54:06 -- accel/accel.sh@21 -- # val= 00:11:32.643 00:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.643 00:54:06 -- accel/accel.sh@20 -- # IFS=: 00:11:32.643 00:54:06 -- accel/accel.sh@20 -- # read -r var val 00:11:32.643 00:54:06 -- accel/accel.sh@21 -- # val= 00:11:32.643 00:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.643 00:54:06 -- accel/accel.sh@20 -- # IFS=: 00:11:32.643 00:54:06 -- accel/accel.sh@20 -- # read -r var val 00:11:32.643 00:54:06 -- accel/accel.sh@21 -- # val= 00:11:32.643 00:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.643 00:54:06 -- accel/accel.sh@20 -- # IFS=: 
00:11:32.643 00:54:06 -- accel/accel.sh@20 -- # read -r var val 00:11:32.643 00:54:06 -- accel/accel.sh@21 -- # val= 00:11:32.643 00:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.643 00:54:06 -- accel/accel.sh@20 -- # IFS=: 00:11:32.643 00:54:06 -- accel/accel.sh@20 -- # read -r var val 00:11:32.643 00:54:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:32.643 00:54:06 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:11:32.643 00:54:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:32.643 00:11:32.643 real 0m3.377s 00:11:32.643 user 0m2.789s 00:11:32.643 sys 0m0.421s 00:11:32.643 00:54:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:32.643 00:54:06 -- common/autotest_common.sh@10 -- # set +x 00:11:32.643 ************************************ 00:11:32.643 END TEST accel_fill 00:11:32.643 ************************************ 00:11:32.643 00:54:06 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:11:32.643 00:54:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:32.643 00:54:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:32.643 00:54:06 -- common/autotest_common.sh@10 -- # set +x 00:11:32.643 ************************************ 00:11:32.643 START TEST accel_copy_crc32c 00:11:32.643 ************************************ 00:11:32.643 00:54:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:11:32.643 00:54:06 -- accel/accel.sh@16 -- # local accel_opc 00:11:32.643 00:54:06 -- accel/accel.sh@17 -- # local accel_module 00:11:32.643 00:54:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:11:32.643 00:54:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:11:32.643 00:54:06 -- accel/accel.sh@12 -- # build_accel_config 00:11:32.643 00:54:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:32.643 00:54:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:32.643 00:54:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:32.643 00:54:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:32.643 00:54:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:32.643 00:54:06 -- accel/accel.sh@41 -- # local IFS=, 00:11:32.643 00:54:06 -- accel/accel.sh@42 -- # jq -r . 00:11:32.643 [2024-11-18 00:54:06.939230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:32.643 [2024-11-18 00:54:06.940400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118245 ] 00:11:32.902 [2024-11-18 00:54:07.142981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.902 [2024-11-18 00:54:07.241850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.281 00:54:08 -- accel/accel.sh@18 -- # out=' 00:11:34.281 SPDK Configuration: 00:11:34.281 Core mask: 0x1 00:11:34.281 00:11:34.281 Accel Perf Configuration: 00:11:34.281 Workload Type: copy_crc32c 00:11:34.281 CRC-32C seed: 0 00:11:34.281 Vector size: 4096 bytes 00:11:34.281 Transfer size: 4096 bytes 00:11:34.281 Vector count 1 00:11:34.281 Module: software 00:11:34.281 Queue depth: 32 00:11:34.281 Allocate depth: 32 00:11:34.281 # threads/core: 1 00:11:34.281 Run time: 1 seconds 00:11:34.281 Verify: Yes 00:11:34.281 00:11:34.281 Running for 1 seconds... 
00:11:34.281 00:11:34.281 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:34.281 ------------------------------------------------------------------------------------ 00:11:34.281 0,0 265792/s 1038 MiB/s 0 0 00:11:34.281 ==================================================================================== 00:11:34.281 Total 265792/s 1038 MiB/s 0 0' 00:11:34.281 00:54:08 -- accel/accel.sh@20 -- # IFS=: 00:11:34.281 00:54:08 -- accel/accel.sh@20 -- # read -r var val 00:11:34.281 00:54:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:11:34.281 00:54:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:11:34.281 00:54:08 -- accel/accel.sh@12 -- # build_accel_config 00:11:34.281 00:54:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:34.281 00:54:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:34.281 00:54:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:34.281 00:54:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:34.281 00:54:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:34.281 00:54:08 -- accel/accel.sh@41 -- # local IFS=, 00:11:34.281 00:54:08 -- accel/accel.sh@42 -- # jq -r . 00:11:34.540 [2024-11-18 00:54:08.684831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:34.540 [2024-11-18 00:54:08.685108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118280 ] 00:11:34.540 [2024-11-18 00:54:08.841190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.540 [2024-11-18 00:54:08.932867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.799 00:54:09 -- accel/accel.sh@21 -- # val= 00:11:34.799 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.799 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.799 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.799 00:54:09 -- accel/accel.sh@21 -- # val= 00:11:34.799 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.799 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.799 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.799 00:54:09 -- accel/accel.sh@21 -- # val=0x1 00:11:34.799 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.799 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.799 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.799 00:54:09 -- accel/accel.sh@21 -- # val= 00:11:34.799 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.799 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.799 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.799 00:54:09 -- accel/accel.sh@21 -- # val= 00:11:34.800 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.800 00:54:09 -- accel/accel.sh@21 -- # val=copy_crc32c 00:11:34.800 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.800 00:54:09 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.800 00:54:09 -- accel/accel.sh@21 -- # val=0 00:11:34.800 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.800 
00:54:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:34.800 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.800 00:54:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:34.800 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.800 00:54:09 -- accel/accel.sh@21 -- # val= 00:11:34.800 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.800 00:54:09 -- accel/accel.sh@21 -- # val=software 00:11:34.800 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.800 00:54:09 -- accel/accel.sh@23 -- # accel_module=software 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.800 00:54:09 -- accel/accel.sh@21 -- # val=32 00:11:34.800 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.800 00:54:09 -- accel/accel.sh@21 -- # val=32 00:11:34.800 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.800 00:54:09 -- accel/accel.sh@21 -- # val=1 00:11:34.800 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.800 00:54:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:34.800 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.800 00:54:09 -- accel/accel.sh@21 -- # val=Yes 00:11:34.800 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.800 00:54:09 -- accel/accel.sh@21 -- # val= 00:11:34.800 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:34.800 00:54:09 -- accel/accel.sh@21 -- # val= 00:11:34.800 00:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # IFS=: 00:11:34.800 00:54:09 -- accel/accel.sh@20 -- # read -r var val 00:11:36.176 00:54:10 -- accel/accel.sh@21 -- # val= 00:11:36.176 00:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.176 00:54:10 -- accel/accel.sh@20 -- # IFS=: 00:11:36.176 00:54:10 -- accel/accel.sh@20 -- # read -r var val 00:11:36.176 00:54:10 -- accel/accel.sh@21 -- # val= 00:11:36.176 00:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.176 00:54:10 -- accel/accel.sh@20 -- # IFS=: 00:11:36.176 00:54:10 -- accel/accel.sh@20 -- # read -r var val 00:11:36.176 00:54:10 -- accel/accel.sh@21 -- # val= 00:11:36.176 00:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.176 00:54:10 -- accel/accel.sh@20 -- # IFS=: 00:11:36.176 00:54:10 -- accel/accel.sh@20 -- # read -r var val 00:11:36.176 00:54:10 -- accel/accel.sh@21 -- # val= 00:11:36.176 00:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.176 00:54:10 -- accel/accel.sh@20 -- # IFS=: 
00:11:36.176 00:54:10 -- accel/accel.sh@20 -- # read -r var val 00:11:36.176 00:54:10 -- accel/accel.sh@21 -- # val= 00:11:36.176 00:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.176 00:54:10 -- accel/accel.sh@20 -- # IFS=: 00:11:36.176 00:54:10 -- accel/accel.sh@20 -- # read -r var val 00:11:36.176 00:54:10 -- accel/accel.sh@21 -- # val= 00:11:36.176 00:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.176 00:54:10 -- accel/accel.sh@20 -- # IFS=: 00:11:36.176 00:54:10 -- accel/accel.sh@20 -- # read -r var val 00:11:36.176 00:54:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:36.176 00:54:10 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:11:36.176 00:54:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:36.176 00:11:36.176 real 0m3.447s 00:11:36.176 user 0m2.852s 00:11:36.176 sys 0m0.435s 00:11:36.176 00:54:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:36.177 ************************************ 00:11:36.177 END TEST accel_copy_crc32c 00:11:36.177 ************************************ 00:11:36.177 00:54:10 -- common/autotest_common.sh@10 -- # set +x 00:11:36.177 00:54:10 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:11:36.177 00:54:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:11:36.177 00:54:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:36.177 00:54:10 -- common/autotest_common.sh@10 -- # set +x 00:11:36.177 ************************************ 00:11:36.177 START TEST accel_copy_crc32c_C2 00:11:36.177 ************************************ 00:11:36.177 00:54:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:11:36.177 00:54:10 -- accel/accel.sh@16 -- # local accel_opc 00:11:36.177 00:54:10 -- accel/accel.sh@17 -- # local accel_module 00:11:36.177 00:54:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:11:36.177 00:54:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:11:36.177 00:54:10 -- accel/accel.sh@12 -- # build_accel_config 00:11:36.177 00:54:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:36.177 00:54:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:36.177 00:54:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:36.177 00:54:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:36.177 00:54:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:36.177 00:54:10 -- accel/accel.sh@41 -- # local IFS=, 00:11:36.177 00:54:10 -- accel/accel.sh@42 -- # jq -r . 00:11:36.177 [2024-11-18 00:54:10.440570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
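Each accel_test wrapper in this log boils down to the accel_perf example binary with the flags echoed in the trace. Comparing those command lines with the configuration dumps, -w picks the workload, -t the run time in seconds, -y enables verification, -C the vector count, and in the fill test -f 128 / -q 64 / -a 64 surface as 'Fill pattern: 0x80', 'Queue depth: 64' and 'Allocate depth: 64'. A standalone re-run of the test starting here would look roughly like the line below; dropping the harness-supplied -c /dev/fd/62 is an assumption, justified only by the JSON config fed through it being empty (accel_json_cfg=()) in every run of this job.

  # roughly the run traced above, minus the (empty) -c /dev/fd/62 accel JSON config
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2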
00:11:36.177 [2024-11-18 00:54:10.440851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118325 ] 00:11:36.436 [2024-11-18 00:54:10.600941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.437 [2024-11-18 00:54:10.676843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.848 00:54:12 -- accel/accel.sh@18 -- # out=' 00:11:37.848 SPDK Configuration: 00:11:37.848 Core mask: 0x1 00:11:37.848 00:11:37.848 Accel Perf Configuration: 00:11:37.848 Workload Type: copy_crc32c 00:11:37.848 CRC-32C seed: 0 00:11:37.848 Vector size: 4096 bytes 00:11:37.848 Transfer size: 8192 bytes 00:11:37.848 Vector count 2 00:11:37.848 Module: software 00:11:37.848 Queue depth: 32 00:11:37.848 Allocate depth: 32 00:11:37.848 # threads/core: 1 00:11:37.848 Run time: 1 seconds 00:11:37.848 Verify: Yes 00:11:37.848 00:11:37.848 Running for 1 seconds... 00:11:37.848 00:11:37.848 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:37.848 ------------------------------------------------------------------------------------ 00:11:37.848 0,0 191136/s 1493 MiB/s 0 0 00:11:37.848 ==================================================================================== 00:11:37.848 Total 191136/s 746 MiB/s 0 0' 00:11:37.848 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:37.848 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:37.848 00:54:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:11:37.848 00:54:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:11:37.848 00:54:12 -- accel/accel.sh@12 -- # build_accel_config 00:11:37.849 00:54:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:37.849 00:54:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:37.849 00:54:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:37.849 00:54:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:37.849 00:54:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:37.849 00:54:12 -- accel/accel.sh@41 -- # local IFS=, 00:11:37.849 00:54:12 -- accel/accel.sh@42 -- # jq -r . 00:11:37.849 [2024-11-18 00:54:12.123609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:37.849 [2024-11-18 00:54:12.123899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118348 ] 00:11:38.126 [2024-11-18 00:54:12.280134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.126 [2024-11-18 00:54:12.368898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.126 00:54:12 -- accel/accel.sh@21 -- # val= 00:11:38.126 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.126 00:54:12 -- accel/accel.sh@21 -- # val= 00:11:38.126 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.126 00:54:12 -- accel/accel.sh@21 -- # val=0x1 00:11:38.126 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.126 00:54:12 -- accel/accel.sh@21 -- # val= 00:11:38.126 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.126 00:54:12 -- accel/accel.sh@21 -- # val= 00:11:38.126 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.126 00:54:12 -- accel/accel.sh@21 -- # val=copy_crc32c 00:11:38.126 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.126 00:54:12 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.126 00:54:12 -- accel/accel.sh@21 -- # val=0 00:11:38.126 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.126 00:54:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:38.126 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.126 00:54:12 -- accel/accel.sh@21 -- # val='8192 bytes' 00:11:38.126 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.126 00:54:12 -- accel/accel.sh@21 -- # val= 00:11:38.126 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.126 00:54:12 -- accel/accel.sh@21 -- # val=software 00:11:38.126 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.126 00:54:12 -- accel/accel.sh@23 -- # accel_module=software 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.126 00:54:12 -- accel/accel.sh@21 -- # val=32 00:11:38.126 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.126 00:54:12 -- accel/accel.sh@21 -- # val=32 
00:11:38.126 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.126 00:54:12 -- accel/accel.sh@21 -- # val=1 00:11:38.126 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.126 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.127 00:54:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:38.127 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.127 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.127 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.127 00:54:12 -- accel/accel.sh@21 -- # val=Yes 00:11:38.127 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.127 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.127 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.127 00:54:12 -- accel/accel.sh@21 -- # val= 00:11:38.127 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.127 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.127 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:38.127 00:54:12 -- accel/accel.sh@21 -- # val= 00:11:38.127 00:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.127 00:54:12 -- accel/accel.sh@20 -- # IFS=: 00:11:38.127 00:54:12 -- accel/accel.sh@20 -- # read -r var val 00:11:39.505 00:54:13 -- accel/accel.sh@21 -- # val= 00:11:39.505 00:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.505 00:54:13 -- accel/accel.sh@20 -- # IFS=: 00:11:39.505 00:54:13 -- accel/accel.sh@20 -- # read -r var val 00:11:39.505 00:54:13 -- accel/accel.sh@21 -- # val= 00:11:39.505 00:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.505 00:54:13 -- accel/accel.sh@20 -- # IFS=: 00:11:39.505 00:54:13 -- accel/accel.sh@20 -- # read -r var val 00:11:39.505 00:54:13 -- accel/accel.sh@21 -- # val= 00:11:39.505 00:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.505 00:54:13 -- accel/accel.sh@20 -- # IFS=: 00:11:39.505 00:54:13 -- accel/accel.sh@20 -- # read -r var val 00:11:39.505 00:54:13 -- accel/accel.sh@21 -- # val= 00:11:39.505 00:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.505 00:54:13 -- accel/accel.sh@20 -- # IFS=: 00:11:39.505 00:54:13 -- accel/accel.sh@20 -- # read -r var val 00:11:39.505 00:54:13 -- accel/accel.sh@21 -- # val= 00:11:39.505 00:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.505 00:54:13 -- accel/accel.sh@20 -- # IFS=: 00:11:39.505 00:54:13 -- accel/accel.sh@20 -- # read -r var val 00:11:39.505 00:54:13 -- accel/accel.sh@21 -- # val= 00:11:39.505 00:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.505 00:54:13 -- accel/accel.sh@20 -- # IFS=: 00:11:39.505 00:54:13 -- accel/accel.sh@20 -- # read -r var val 00:11:39.505 00:54:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:39.505 00:54:13 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:11:39.505 00:54:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:39.505 00:11:39.505 real 0m3.375s 00:11:39.505 user 0m2.785s 00:11:39.505 sys 0m0.430s 00:11:39.505 00:54:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:39.505 00:54:13 -- common/autotest_common.sh@10 -- # set +x 00:11:39.505 ************************************ 00:11:39.505 END TEST accel_copy_crc32c_C2 00:11:39.505 ************************************ 00:11:39.505 00:54:13 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:11:39.505 00:54:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
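The Bandwidth column in these result tables follows directly from the Transfers column and the 4096-byte buffer size; checking the copy_crc32c_C2 figures reported above (191136 transfers/s), with awk used only for the byte-to-MiB conversion:

  awk 'BEGIN { printf "%d MiB/s\n", 191136 * 4096 / (1024 * 1024) }'       # 746  (Total row)
  awk 'BEGIN { printf "%d MiB/s\n", 191136 * 2 * 4096 / (1024 * 1024) }'   # 1493 (per-core row)

So in the two-vector (-C 2) runs the per-core row evidently counts both 4096-byte vectors of each operation while the Total row counts only one, whereas the single-vector tables above (copy, fill, copy_crc32c) show the same figure in both rows.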
00:11:39.505 00:54:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:39.505 00:54:13 -- common/autotest_common.sh@10 -- # set +x 00:11:39.505 ************************************ 00:11:39.505 START TEST accel_dualcast 00:11:39.505 ************************************ 00:11:39.505 00:54:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:11:39.505 00:54:13 -- accel/accel.sh@16 -- # local accel_opc 00:11:39.505 00:54:13 -- accel/accel.sh@17 -- # local accel_module 00:11:39.505 00:54:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:11:39.505 00:54:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:39.505 00:54:13 -- accel/accel.sh@12 -- # build_accel_config 00:11:39.505 00:54:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:39.505 00:54:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:39.505 00:54:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:39.505 00:54:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:39.505 00:54:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:39.505 00:54:13 -- accel/accel.sh@41 -- # local IFS=, 00:11:39.505 00:54:13 -- accel/accel.sh@42 -- # jq -r . 00:11:39.505 [2024-11-18 00:54:13.876303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:39.505 [2024-11-18 00:54:13.876609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118393 ] 00:11:39.765 [2024-11-18 00:54:14.017876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.765 [2024-11-18 00:54:14.089891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.143 00:54:15 -- accel/accel.sh@18 -- # out=' 00:11:41.143 SPDK Configuration: 00:11:41.143 Core mask: 0x1 00:11:41.143 00:11:41.143 Accel Perf Configuration: 00:11:41.143 Workload Type: dualcast 00:11:41.143 Transfer size: 4096 bytes 00:11:41.143 Vector count 1 00:11:41.143 Module: software 00:11:41.143 Queue depth: 32 00:11:41.143 Allocate depth: 32 00:11:41.143 # threads/core: 1 00:11:41.143 Run time: 1 seconds 00:11:41.143 Verify: Yes 00:11:41.143 00:11:41.143 Running for 1 seconds... 00:11:41.143 00:11:41.143 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:41.143 ------------------------------------------------------------------------------------ 00:11:41.143 0,0 380736/s 1487 MiB/s 0 0 00:11:41.143 ==================================================================================== 00:11:41.143 Total 380736/s 1487 MiB/s 0 0' 00:11:41.143 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.143 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.143 00:54:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:11:41.143 00:54:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:41.143 00:54:15 -- accel/accel.sh@12 -- # build_accel_config 00:11:41.143 00:54:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:41.143 00:54:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:41.143 00:54:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:41.143 00:54:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:41.143 00:54:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:41.143 00:54:15 -- accel/accel.sh@41 -- # local IFS=, 00:11:41.144 00:54:15 -- accel/accel.sh@42 -- # jq -r . 
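With the console output saved to a file (build.log is a hypothetical name), the per-workload throughput summaries scattered through this log can be pulled out in one pass, since every summary row contains the literal 'Total' followed by a transfers-per-second figure:

  grep -E 'Total [0-9]+/s' build.log   # one line per accel_perf run

For the runs above, that makes fill the fastest software-module workload so far (2167 MiB/s) and the two-vector copy_crc32c the slowest (746 MiB/s).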
00:11:41.144 [2024-11-18 00:54:15.519386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:41.144 [2024-11-18 00:54:15.519658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118425 ] 00:11:41.402 [2024-11-18 00:54:15.676441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.402 [2024-11-18 00:54:15.762256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val= 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val= 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val=0x1 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val= 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val= 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val=dualcast 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val= 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val=software 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@23 -- # accel_module=software 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val=32 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val=32 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val=1 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 
00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val=Yes 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val= 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:41.662 00:54:15 -- accel/accel.sh@21 -- # val= 00:11:41.662 00:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # IFS=: 00:11:41.662 00:54:15 -- accel/accel.sh@20 -- # read -r var val 00:11:43.040 00:54:17 -- accel/accel.sh@21 -- # val= 00:11:43.040 00:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.040 00:54:17 -- accel/accel.sh@20 -- # IFS=: 00:11:43.040 00:54:17 -- accel/accel.sh@20 -- # read -r var val 00:11:43.040 00:54:17 -- accel/accel.sh@21 -- # val= 00:11:43.040 00:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.040 00:54:17 -- accel/accel.sh@20 -- # IFS=: 00:11:43.040 00:54:17 -- accel/accel.sh@20 -- # read -r var val 00:11:43.040 00:54:17 -- accel/accel.sh@21 -- # val= 00:11:43.040 00:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.040 00:54:17 -- accel/accel.sh@20 -- # IFS=: 00:11:43.040 00:54:17 -- accel/accel.sh@20 -- # read -r var val 00:11:43.040 00:54:17 -- accel/accel.sh@21 -- # val= 00:11:43.040 00:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.040 00:54:17 -- accel/accel.sh@20 -- # IFS=: 00:11:43.040 00:54:17 -- accel/accel.sh@20 -- # read -r var val 00:11:43.040 00:54:17 -- accel/accel.sh@21 -- # val= 00:11:43.040 00:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.040 00:54:17 -- accel/accel.sh@20 -- # IFS=: 00:11:43.040 00:54:17 -- accel/accel.sh@20 -- # read -r var val 00:11:43.040 00:54:17 -- accel/accel.sh@21 -- # val= 00:11:43.040 00:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.040 00:54:17 -- accel/accel.sh@20 -- # IFS=: 00:11:43.040 00:54:17 -- accel/accel.sh@20 -- # read -r var val 00:11:43.040 00:54:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:43.040 00:54:17 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:11:43.040 00:54:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:43.040 00:11:43.040 real 0m3.341s 00:11:43.040 user 0m2.724s 00:11:43.040 sys 0m0.439s 00:11:43.040 00:54:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:43.040 00:54:17 -- common/autotest_common.sh@10 -- # set +x 00:11:43.040 ************************************ 00:11:43.040 END TEST accel_dualcast 00:11:43.040 ************************************ 00:11:43.040 00:54:17 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:11:43.040 00:54:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:43.040 00:54:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:43.040 00:54:17 -- common/autotest_common.sh@10 -- # set +x 00:11:43.040 ************************************ 00:11:43.040 START TEST accel_compare 00:11:43.040 ************************************ 00:11:43.040 00:54:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:11:43.040 
00:54:17 -- accel/accel.sh@16 -- # local accel_opc 00:11:43.041 00:54:17 -- accel/accel.sh@17 -- # local accel_module 00:11:43.041 00:54:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:11:43.041 00:54:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:43.041 00:54:17 -- accel/accel.sh@12 -- # build_accel_config 00:11:43.041 00:54:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:43.041 00:54:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:43.041 00:54:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:43.041 00:54:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:43.041 00:54:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:43.041 00:54:17 -- accel/accel.sh@41 -- # local IFS=, 00:11:43.041 00:54:17 -- accel/accel.sh@42 -- # jq -r . 00:11:43.041 [2024-11-18 00:54:17.286009] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:43.041 [2024-11-18 00:54:17.286296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118463 ] 00:11:43.299 [2024-11-18 00:54:17.441276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.299 [2024-11-18 00:54:17.519467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.677 00:54:18 -- accel/accel.sh@18 -- # out=' 00:11:44.677 SPDK Configuration: 00:11:44.677 Core mask: 0x1 00:11:44.677 00:11:44.677 Accel Perf Configuration: 00:11:44.677 Workload Type: compare 00:11:44.677 Transfer size: 4096 bytes 00:11:44.677 Vector count 1 00:11:44.677 Module: software 00:11:44.677 Queue depth: 32 00:11:44.677 Allocate depth: 32 00:11:44.677 # threads/core: 1 00:11:44.677 Run time: 1 seconds 00:11:44.677 Verify: Yes 00:11:44.677 00:11:44.677 Running for 1 seconds... 00:11:44.677 00:11:44.677 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:44.677 ------------------------------------------------------------------------------------ 00:11:44.677 0,0 515168/s 2012 MiB/s 0 0 00:11:44.677 ==================================================================================== 00:11:44.677 Total 515168/s 2012 MiB/s 0 0' 00:11:44.677 00:54:18 -- accel/accel.sh@20 -- # IFS=: 00:11:44.677 00:54:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:11:44.677 00:54:18 -- accel/accel.sh@20 -- # read -r var val 00:11:44.677 00:54:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:44.677 00:54:18 -- accel/accel.sh@12 -- # build_accel_config 00:11:44.677 00:54:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:44.677 00:54:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:44.677 00:54:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:44.677 00:54:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:44.677 00:54:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:44.677 00:54:18 -- accel/accel.sh@41 -- # local IFS=, 00:11:44.677 00:54:18 -- accel/accel.sh@42 -- # jq -r . 00:11:44.677 [2024-11-18 00:54:18.956982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
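The bandwidth column in the results above is just transfers per second times the 4096-byte transfer size. A quick sanity check of the reported 2012 MiB/s from 515168 transfers/s, using only numbers from this output:

    # 515168 transfers/s * 4096 bytes per transfer, expressed in MiB/s
    echo $(( 515168 * 4096 / 1024 / 1024 ))   # prints 2012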
00:11:44.677 [2024-11-18 00:54:18.957324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118493 ] 00:11:44.936 [2024-11-18 00:54:19.113863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.936 [2024-11-18 00:54:19.203982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val= 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val= 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val=0x1 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val= 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val= 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val=compare 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@24 -- # accel_opc=compare 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val= 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val=software 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@23 -- # accel_module=software 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val=32 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val=32 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val=1 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val='1 seconds' 
00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val=Yes 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val= 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:44.936 00:54:19 -- accel/accel.sh@21 -- # val= 00:11:44.936 00:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # IFS=: 00:11:44.936 00:54:19 -- accel/accel.sh@20 -- # read -r var val 00:11:46.312 00:54:20 -- accel/accel.sh@21 -- # val= 00:11:46.312 00:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.312 00:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:46.312 00:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:46.312 00:54:20 -- accel/accel.sh@21 -- # val= 00:11:46.312 00:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.312 00:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:46.312 00:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:46.312 00:54:20 -- accel/accel.sh@21 -- # val= 00:11:46.312 00:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.312 00:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:46.312 00:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:46.312 00:54:20 -- accel/accel.sh@21 -- # val= 00:11:46.312 00:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.312 00:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:46.312 00:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:46.312 00:54:20 -- accel/accel.sh@21 -- # val= 00:11:46.312 00:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.312 00:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:46.312 00:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:46.312 00:54:20 -- accel/accel.sh@21 -- # val= 00:11:46.312 00:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.312 00:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:46.312 00:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:46.313 00:54:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:46.313 00:54:20 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:11:46.313 00:54:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:46.313 00:11:46.313 real 0m3.386s 00:11:46.313 user 0m2.802s 00:11:46.313 sys 0m0.425s 00:11:46.313 00:54:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:46.313 00:54:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.313 ************************************ 00:11:46.313 END TEST accel_compare 00:11:46.313 ************************************ 00:11:46.313 00:54:20 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:11:46.313 00:54:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:46.313 00:54:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:46.313 00:54:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.313 ************************************ 00:11:46.313 START TEST accel_xor 00:11:46.313 ************************************ 00:11:46.313 00:54:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:11:46.313 00:54:20 -- accel/accel.sh@16 -- # local accel_opc 00:11:46.313 00:54:20 -- accel/accel.sh@17 -- # local accel_module 00:11:46.313 
00:54:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:11:46.313 00:54:20 -- accel/accel.sh@12 -- # build_accel_config 00:11:46.313 00:54:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:46.572 00:54:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:46.572 00:54:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:46.572 00:54:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:46.572 00:54:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:46.572 00:54:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:46.572 00:54:20 -- accel/accel.sh@41 -- # local IFS=, 00:11:46.572 00:54:20 -- accel/accel.sh@42 -- # jq -r . 00:11:46.572 [2024-11-18 00:54:20.742862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:46.572 [2024-11-18 00:54:20.743159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118538 ] 00:11:46.572 [2024-11-18 00:54:20.900519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.831 [2024-11-18 00:54:21.000062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.208 00:54:22 -- accel/accel.sh@18 -- # out=' 00:11:48.208 SPDK Configuration: 00:11:48.208 Core mask: 0x1 00:11:48.208 00:11:48.208 Accel Perf Configuration: 00:11:48.208 Workload Type: xor 00:11:48.208 Source buffers: 2 00:11:48.208 Transfer size: 4096 bytes 00:11:48.208 Vector count 1 00:11:48.208 Module: software 00:11:48.208 Queue depth: 32 00:11:48.208 Allocate depth: 32 00:11:48.208 # threads/core: 1 00:11:48.208 Run time: 1 seconds 00:11:48.208 Verify: Yes 00:11:48.208 00:11:48.208 Running for 1 seconds... 00:11:48.208 00:11:48.208 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:48.208 ------------------------------------------------------------------------------------ 00:11:48.209 0,0 340576/s 1330 MiB/s 0 0 00:11:48.209 ==================================================================================== 00:11:48.209 Total 340576/s 1330 MiB/s 0 0' 00:11:48.209 00:54:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:11:48.209 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.209 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.209 00:54:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:48.209 00:54:22 -- accel/accel.sh@12 -- # build_accel_config 00:11:48.209 00:54:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:48.209 00:54:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:48.209 00:54:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:48.209 00:54:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:48.209 00:54:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:48.209 00:54:22 -- accel/accel.sh@41 -- # local IFS=, 00:11:48.209 00:54:22 -- accel/accel.sh@42 -- # jq -r . 00:11:48.209 [2024-11-18 00:54:22.449169] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
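This first xor run passes no -x option, and the configuration block above reports Source buffers: 2, so two source buffers appears to be the default. A standalone sketch of the command visible in the trace, with the harness config fd omitted:

    # Sketch: xor across the default two source buffers, verify enabled
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y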
00:11:48.209 [2024-11-18 00:54:22.449459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118566 ] 00:11:48.468 [2024-11-18 00:54:22.613827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.468 [2024-11-18 00:54:22.705845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val= 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val= 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val=0x1 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val= 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val= 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val=xor 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val=2 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val= 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val=software 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@23 -- # accel_module=software 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val=32 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val=32 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val=1 00:11:48.468 00:54:22 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val=Yes 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val= 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:48.468 00:54:22 -- accel/accel.sh@21 -- # val= 00:11:48.468 00:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:48.468 00:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:49.847 00:54:24 -- accel/accel.sh@21 -- # val= 00:11:49.847 00:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.847 00:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:49.847 00:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:49.847 00:54:24 -- accel/accel.sh@21 -- # val= 00:11:49.847 00:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.847 00:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:49.847 00:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:49.847 00:54:24 -- accel/accel.sh@21 -- # val= 00:11:49.847 00:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.847 00:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:49.847 00:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:49.847 00:54:24 -- accel/accel.sh@21 -- # val= 00:11:49.847 00:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.847 00:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:49.847 00:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:49.847 00:54:24 -- accel/accel.sh@21 -- # val= 00:11:49.847 00:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.847 00:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:49.847 00:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:49.847 00:54:24 -- accel/accel.sh@21 -- # val= 00:11:49.847 00:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.847 00:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:49.847 00:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:49.847 00:54:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:49.847 00:54:24 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:49.847 00:54:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:49.847 00:11:49.847 real 0m3.431s 00:11:49.847 user 0m2.829s 00:11:49.847 sys 0m0.441s 00:11:49.847 00:54:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:49.847 00:54:24 -- common/autotest_common.sh@10 -- # set +x 00:11:49.847 ************************************ 00:11:49.847 END TEST accel_xor 00:11:49.847 ************************************ 00:11:49.847 00:54:24 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:49.847 00:54:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:11:49.847 00:54:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:49.847 00:54:24 -- common/autotest_common.sh@10 -- # set +x 00:11:49.847 ************************************ 00:11:49.847 START TEST accel_xor 00:11:49.847 ************************************ 00:11:49.847 
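The second xor test adds -x 3, and the configuration printed below reports Source buffers: 3, so -x evidently selects the number of xor source buffers. A standalone sketch of that invocation:

    # Sketch: xor across three source buffers via -x 3
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3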
00:54:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:11:49.847 00:54:24 -- accel/accel.sh@16 -- # local accel_opc 00:11:49.847 00:54:24 -- accel/accel.sh@17 -- # local accel_module 00:11:49.848 00:54:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:11:49.848 00:54:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:49.848 00:54:24 -- accel/accel.sh@12 -- # build_accel_config 00:11:49.848 00:54:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:49.848 00:54:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:49.848 00:54:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:49.848 00:54:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:49.848 00:54:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:49.848 00:54:24 -- accel/accel.sh@41 -- # local IFS=, 00:11:49.848 00:54:24 -- accel/accel.sh@42 -- # jq -r . 00:11:49.848 [2024-11-18 00:54:24.240173] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:49.848 [2024-11-18 00:54:24.240464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118607 ] 00:11:50.107 [2024-11-18 00:54:24.397153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.107 [2024-11-18 00:54:24.473075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.486 00:54:25 -- accel/accel.sh@18 -- # out=' 00:11:51.486 SPDK Configuration: 00:11:51.486 Core mask: 0x1 00:11:51.486 00:11:51.486 Accel Perf Configuration: 00:11:51.486 Workload Type: xor 00:11:51.486 Source buffers: 3 00:11:51.486 Transfer size: 4096 bytes 00:11:51.486 Vector count 1 00:11:51.486 Module: software 00:11:51.486 Queue depth: 32 00:11:51.486 Allocate depth: 32 00:11:51.486 # threads/core: 1 00:11:51.486 Run time: 1 seconds 00:11:51.486 Verify: Yes 00:11:51.486 00:11:51.486 Running for 1 seconds... 00:11:51.486 00:11:51.486 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:51.486 ------------------------------------------------------------------------------------ 00:11:51.486 0,0 322176/s 1258 MiB/s 0 0 00:11:51.486 ==================================================================================== 00:11:51.486 Total 322176/s 1258 MiB/s 0 0' 00:11:51.745 00:54:25 -- accel/accel.sh@20 -- # IFS=: 00:11:51.745 00:54:25 -- accel/accel.sh@20 -- # read -r var val 00:11:51.745 00:54:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:51.745 00:54:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:51.745 00:54:25 -- accel/accel.sh@12 -- # build_accel_config 00:11:51.745 00:54:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:51.745 00:54:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:51.745 00:54:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:51.745 00:54:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:51.745 00:54:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:51.745 00:54:25 -- accel/accel.sh@41 -- # local IFS=, 00:11:51.745 00:54:25 -- accel/accel.sh@42 -- # jq -r . 00:11:51.745 [2024-11-18 00:54:25.925158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
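Comparing the two xor runs in this log, software-module throughput drops from 340576 transfers/s with two source buffers to 322176 transfers/s with three. The relative drop, computed only from the figures reported above:

    # Relative throughput drop, 2-buffer xor vs 3-buffer xor
    echo "scale=1; (340576 - 322176) * 100 / 340576" | bc   # ~5.4 percent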
00:11:51.745 [2024-11-18 00:54:25.925445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118643 ] 00:11:51.745 [2024-11-18 00:54:26.081295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.004 [2024-11-18 00:54:26.173028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val= 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val= 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val=0x1 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val= 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val= 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val=xor 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val=3 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val= 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val=software 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@23 -- # accel_module=software 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val=32 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val=32 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val=1 00:11:52.004 00:54:26 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val=Yes 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val= 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:52.004 00:54:26 -- accel/accel.sh@21 -- # val= 00:11:52.004 00:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:52.004 00:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:53.381 00:54:27 -- accel/accel.sh@21 -- # val= 00:11:53.381 00:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.381 00:54:27 -- accel/accel.sh@20 -- # IFS=: 00:11:53.381 00:54:27 -- accel/accel.sh@20 -- # read -r var val 00:11:53.381 00:54:27 -- accel/accel.sh@21 -- # val= 00:11:53.381 00:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.381 00:54:27 -- accel/accel.sh@20 -- # IFS=: 00:11:53.381 00:54:27 -- accel/accel.sh@20 -- # read -r var val 00:11:53.381 00:54:27 -- accel/accel.sh@21 -- # val= 00:11:53.381 00:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.381 00:54:27 -- accel/accel.sh@20 -- # IFS=: 00:11:53.381 00:54:27 -- accel/accel.sh@20 -- # read -r var val 00:11:53.381 00:54:27 -- accel/accel.sh@21 -- # val= 00:11:53.381 00:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.381 00:54:27 -- accel/accel.sh@20 -- # IFS=: 00:11:53.381 00:54:27 -- accel/accel.sh@20 -- # read -r var val 00:11:53.381 00:54:27 -- accel/accel.sh@21 -- # val= 00:11:53.381 00:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.381 00:54:27 -- accel/accel.sh@20 -- # IFS=: 00:11:53.381 00:54:27 -- accel/accel.sh@20 -- # read -r var val 00:11:53.381 00:54:27 -- accel/accel.sh@21 -- # val= 00:11:53.381 00:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.381 00:54:27 -- accel/accel.sh@20 -- # IFS=: 00:11:53.381 00:54:27 -- accel/accel.sh@20 -- # read -r var val 00:11:53.381 00:54:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:53.381 00:54:27 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:53.381 00:54:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:53.381 00:11:53.381 real 0m3.384s 00:11:53.381 user 0m2.778s 00:11:53.381 sys 0m0.443s 00:11:53.382 00:54:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:53.382 ************************************ 00:11:53.382 END TEST accel_xor 00:11:53.382 ************************************ 00:11:53.382 00:54:27 -- common/autotest_common.sh@10 -- # set +x 00:11:53.382 00:54:27 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:53.382 00:54:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:11:53.382 00:54:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:53.382 00:54:27 -- common/autotest_common.sh@10 -- # set +x 00:11:53.382 ************************************ 00:11:53.382 START TEST accel_dif_verify 00:11:53.382 ************************************ 
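The dif_verify workload that starts here exercises DIF (Data Integrity Field) checking; the configuration block below shows a 512-byte block with 8 bytes of metadata per block, and the wrapper does not pass -y for this workload, which matches the Verify: No line in the output. A standalone sketch of the command seen in the trace:

    # Sketch: DIF verify workload for 1 second (harness config fd omitted)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify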
00:11:53.382 00:54:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:11:53.382 00:54:27 -- accel/accel.sh@16 -- # local accel_opc 00:11:53.382 00:54:27 -- accel/accel.sh@17 -- # local accel_module 00:11:53.382 00:54:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:11:53.382 00:54:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:53.382 00:54:27 -- accel/accel.sh@12 -- # build_accel_config 00:11:53.382 00:54:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:53.382 00:54:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:53.382 00:54:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:53.382 00:54:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:53.382 00:54:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:53.382 00:54:27 -- accel/accel.sh@41 -- # local IFS=, 00:11:53.382 00:54:27 -- accel/accel.sh@42 -- # jq -r . 00:11:53.382 [2024-11-18 00:54:27.690914] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:53.382 [2024-11-18 00:54:27.691203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118676 ] 00:11:53.640 [2024-11-18 00:54:27.847866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.640 [2024-11-18 00:54:27.922268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.037 00:54:29 -- accel/accel.sh@18 -- # out=' 00:11:55.037 SPDK Configuration: 00:11:55.037 Core mask: 0x1 00:11:55.037 00:11:55.037 Accel Perf Configuration: 00:11:55.037 Workload Type: dif_verify 00:11:55.037 Vector size: 4096 bytes 00:11:55.037 Transfer size: 4096 bytes 00:11:55.037 Block size: 512 bytes 00:11:55.037 Metadata size: 8 bytes 00:11:55.037 Vector count 1 00:11:55.037 Module: software 00:11:55.037 Queue depth: 32 00:11:55.037 Allocate depth: 32 00:11:55.037 # threads/core: 1 00:11:55.037 Run time: 1 seconds 00:11:55.037 Verify: No 00:11:55.037 00:11:55.037 Running for 1 seconds... 00:11:55.037 00:11:55.037 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:55.037 ------------------------------------------------------------------------------------ 00:11:55.037 0,0 119264/s 473 MiB/s 0 0 00:11:55.037 ==================================================================================== 00:11:55.037 Total 119264/s 465 MiB/s 0 0' 00:11:55.037 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.037 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.037 00:54:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:55.037 00:54:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:55.037 00:54:29 -- accel/accel.sh@12 -- # build_accel_config 00:11:55.037 00:54:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:55.037 00:54:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:55.037 00:54:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:55.037 00:54:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:55.037 00:54:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:55.037 00:54:29 -- accel/accel.sh@41 -- # local IFS=, 00:11:55.037 00:54:29 -- accel/accel.sh@42 -- # jq -r . 00:11:55.037 [2024-11-18 00:54:29.356767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
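With 512-byte blocks and 8 bytes of metadata per block, each 4096-byte transfer above covers eight blocks and thus 64 bytes of protection information, and the 465 MiB/s total again follows from transfers/s times the 4 KiB payload (the per-core 473 MiB/s figure presumably normalizes over the measured rather than the nominal run time). Both checks, using only values from the output above:

    # 4096-byte transfer = 8 blocks of 512 bytes -> 64 bytes of DIF metadata per transfer
    echo $(( 4096 / 512 * 8 ))                # prints 64
    # 119264 transfers/s * 4096 bytes, in MiB/s
    echo $(( 119264 * 4096 / 1024 / 1024 ))   # prints 465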
00:11:55.037 [2024-11-18 00:54:29.357053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118712 ] 00:11:55.367 [2024-11-18 00:54:29.513035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.367 [2024-11-18 00:54:29.601383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val= 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val= 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val=0x1 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val= 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val= 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val=dif_verify 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val= 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val=software 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@23 -- # accel_module=software 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- 
accel/accel.sh@21 -- # val=32 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val=32 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val=1 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val=No 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val= 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:55.367 00:54:29 -- accel/accel.sh@21 -- # val= 00:11:55.367 00:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # IFS=: 00:11:55.367 00:54:29 -- accel/accel.sh@20 -- # read -r var val 00:11:56.824 00:54:30 -- accel/accel.sh@21 -- # val= 00:11:56.824 00:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.824 00:54:30 -- accel/accel.sh@20 -- # IFS=: 00:11:56.824 00:54:30 -- accel/accel.sh@20 -- # read -r var val 00:11:56.824 00:54:30 -- accel/accel.sh@21 -- # val= 00:11:56.824 00:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.824 00:54:30 -- accel/accel.sh@20 -- # IFS=: 00:11:56.824 00:54:30 -- accel/accel.sh@20 -- # read -r var val 00:11:56.824 00:54:31 -- accel/accel.sh@21 -- # val= 00:11:56.824 00:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.824 00:54:31 -- accel/accel.sh@20 -- # IFS=: 00:11:56.824 00:54:31 -- accel/accel.sh@20 -- # read -r var val 00:11:56.824 00:54:31 -- accel/accel.sh@21 -- # val= 00:11:56.824 00:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.824 00:54:31 -- accel/accel.sh@20 -- # IFS=: 00:11:56.824 00:54:31 -- accel/accel.sh@20 -- # read -r var val 00:11:56.824 00:54:31 -- accel/accel.sh@21 -- # val= 00:11:56.824 00:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.824 00:54:31 -- accel/accel.sh@20 -- # IFS=: 00:11:56.824 00:54:31 -- accel/accel.sh@20 -- # read -r var val 00:11:56.824 00:54:31 -- accel/accel.sh@21 -- # val= 00:11:56.824 00:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.824 00:54:31 -- accel/accel.sh@20 -- # IFS=: 00:11:56.824 00:54:31 -- accel/accel.sh@20 -- # read -r var val 00:11:56.824 00:54:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:56.824 00:54:31 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:11:56.824 00:54:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:56.824 00:11:56.824 real 0m3.358s 00:11:56.824 user 0m2.800s 00:11:56.824 sys 0m0.403s 00:11:56.824 00:54:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:56.824 00:54:31 -- common/autotest_common.sh@10 -- # set +x 00:11:56.824 ************************************ 00:11:56.824 END 
TEST accel_dif_verify 00:11:56.824 ************************************ 00:11:56.824 00:54:31 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:56.824 00:54:31 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:11:56.824 00:54:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:56.824 00:54:31 -- common/autotest_common.sh@10 -- # set +x 00:11:56.824 ************************************ 00:11:56.824 START TEST accel_dif_generate 00:11:56.824 ************************************ 00:11:56.824 00:54:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:11:56.824 00:54:31 -- accel/accel.sh@16 -- # local accel_opc 00:11:56.824 00:54:31 -- accel/accel.sh@17 -- # local accel_module 00:11:56.824 00:54:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:11:56.824 00:54:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:56.824 00:54:31 -- accel/accel.sh@12 -- # build_accel_config 00:11:56.824 00:54:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:56.824 00:54:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:56.824 00:54:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:56.824 00:54:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:56.824 00:54:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:56.824 00:54:31 -- accel/accel.sh@41 -- # local IFS=, 00:11:56.824 00:54:31 -- accel/accel.sh@42 -- # jq -r . 00:11:56.824 [2024-11-18 00:54:31.113423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:56.824 [2024-11-18 00:54:31.114465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118752 ] 00:11:57.082 [2024-11-18 00:54:31.276478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.082 [2024-11-18 00:54:31.352048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.458 00:54:32 -- accel/accel.sh@18 -- # out=' 00:11:58.458 SPDK Configuration: 00:11:58.458 Core mask: 0x1 00:11:58.458 00:11:58.458 Accel Perf Configuration: 00:11:58.458 Workload Type: dif_generate 00:11:58.458 Vector size: 4096 bytes 00:11:58.458 Transfer size: 4096 bytes 00:11:58.458 Block size: 512 bytes 00:11:58.458 Metadata size: 8 bytes 00:11:58.458 Vector count 1 00:11:58.458 Module: software 00:11:58.458 Queue depth: 32 00:11:58.458 Allocate depth: 32 00:11:58.458 # threads/core: 1 00:11:58.458 Run time: 1 seconds 00:11:58.458 Verify: No 00:11:58.458 00:11:58.458 Running for 1 seconds... 
00:11:58.458 00:11:58.458 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:58.458 ------------------------------------------------------------------------------------ 00:11:58.458 0,0 140448/s 557 MiB/s 0 0 00:11:58.458 ==================================================================================== 00:11:58.458 Total 140448/s 548 MiB/s 0 0' 00:11:58.458 00:54:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:58.458 00:54:32 -- accel/accel.sh@20 -- # IFS=: 00:11:58.458 00:54:32 -- accel/accel.sh@20 -- # read -r var val 00:11:58.458 00:54:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:58.458 00:54:32 -- accel/accel.sh@12 -- # build_accel_config 00:11:58.458 00:54:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:58.458 00:54:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:58.458 00:54:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:58.458 00:54:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:58.458 00:54:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:58.458 00:54:32 -- accel/accel.sh@41 -- # local IFS=, 00:11:58.458 00:54:32 -- accel/accel.sh@42 -- # jq -r . 00:11:58.458 [2024-11-18 00:54:32.783665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:58.458 [2024-11-18 00:54:32.784759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118780 ] 00:11:58.717 [2024-11-18 00:54:32.940754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.717 [2024-11-18 00:54:33.029380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.976 00:54:33 -- accel/accel.sh@21 -- # val= 00:11:58.976 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.976 00:54:33 -- accel/accel.sh@21 -- # val= 00:11:58.976 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.976 00:54:33 -- accel/accel.sh@21 -- # val=0x1 00:11:58.976 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.976 00:54:33 -- accel/accel.sh@21 -- # val= 00:11:58.976 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.976 00:54:33 -- accel/accel.sh@21 -- # val= 00:11:58.976 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.976 00:54:33 -- accel/accel.sh@21 -- # val=dif_generate 00:11:58.976 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.976 00:54:33 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.976 00:54:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:58.976 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # read -r var val 
00:11:58.976 00:54:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:58.976 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.976 00:54:33 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:58.976 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.976 00:54:33 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:58.976 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.976 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.977 00:54:33 -- accel/accel.sh@21 -- # val= 00:11:58.977 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.977 00:54:33 -- accel/accel.sh@21 -- # val=software 00:11:58.977 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.977 00:54:33 -- accel/accel.sh@23 -- # accel_module=software 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.977 00:54:33 -- accel/accel.sh@21 -- # val=32 00:11:58.977 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.977 00:54:33 -- accel/accel.sh@21 -- # val=32 00:11:58.977 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.977 00:54:33 -- accel/accel.sh@21 -- # val=1 00:11:58.977 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.977 00:54:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:58.977 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.977 00:54:33 -- accel/accel.sh@21 -- # val=No 00:11:58.977 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.977 00:54:33 -- accel/accel.sh@21 -- # val= 00:11:58.977 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.977 00:54:33 -- accel/accel.sh@21 -- # val= 00:11:58.977 00:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # IFS=: 00:11:58.977 00:54:33 -- accel/accel.sh@20 -- # read -r var val 00:12:00.355 00:54:34 -- accel/accel.sh@21 -- # val= 00:12:00.355 00:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.355 00:54:34 -- accel/accel.sh@20 -- # IFS=: 00:12:00.355 00:54:34 -- accel/accel.sh@20 -- # read -r var val 00:12:00.355 00:54:34 -- accel/accel.sh@21 -- # val= 00:12:00.355 00:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.355 00:54:34 -- accel/accel.sh@20 -- # IFS=: 00:12:00.355 00:54:34 -- accel/accel.sh@20 -- # read -r var val 00:12:00.355 00:54:34 -- accel/accel.sh@21 -- # val= 00:12:00.355 00:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.355 00:54:34 -- 
accel/accel.sh@20 -- # IFS=: 00:12:00.355 00:54:34 -- accel/accel.sh@20 -- # read -r var val 00:12:00.355 00:54:34 -- accel/accel.sh@21 -- # val= 00:12:00.355 00:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.355 00:54:34 -- accel/accel.sh@20 -- # IFS=: 00:12:00.355 00:54:34 -- accel/accel.sh@20 -- # read -r var val 00:12:00.355 00:54:34 -- accel/accel.sh@21 -- # val= 00:12:00.355 00:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.355 00:54:34 -- accel/accel.sh@20 -- # IFS=: 00:12:00.355 00:54:34 -- accel/accel.sh@20 -- # read -r var val 00:12:00.355 00:54:34 -- accel/accel.sh@21 -- # val= 00:12:00.355 00:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.355 00:54:34 -- accel/accel.sh@20 -- # IFS=: 00:12:00.355 00:54:34 -- accel/accel.sh@20 -- # read -r var val 00:12:00.355 00:54:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:00.355 00:54:34 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:12:00.355 00:54:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:00.355 00:12:00.355 real 0m3.361s 00:12:00.355 user 0m2.756s 00:12:00.355 sys 0m0.441s 00:12:00.355 00:54:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:00.355 00:54:34 -- common/autotest_common.sh@10 -- # set +x 00:12:00.355 ************************************ 00:12:00.355 END TEST accel_dif_generate 00:12:00.355 ************************************ 00:12:00.355 00:54:34 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:00.355 00:54:34 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:00.355 00:54:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:00.355 00:54:34 -- common/autotest_common.sh@10 -- # set +x 00:12:00.355 ************************************ 00:12:00.355 START TEST accel_dif_generate_copy 00:12:00.355 ************************************ 00:12:00.355 00:54:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:12:00.355 00:54:34 -- accel/accel.sh@16 -- # local accel_opc 00:12:00.355 00:54:34 -- accel/accel.sh@17 -- # local accel_module 00:12:00.355 00:54:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:12:00.355 00:54:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:00.355 00:54:34 -- accel/accel.sh@12 -- # build_accel_config 00:12:00.355 00:54:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:00.355 00:54:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:00.355 00:54:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:00.355 00:54:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:00.355 00:54:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:00.355 00:54:34 -- accel/accel.sh@41 -- # local IFS=, 00:12:00.355 00:54:34 -- accel/accel.sh@42 -- # jq -r . 00:12:00.355 [2024-11-18 00:54:34.541758] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
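As the names suggest, dif_generate (above, about 140448 transfers/s in software) computes DIF protection information over its buffers, while the dif_generate_copy test starting here generates it as part of a copy; the wrapper invocations are otherwise identical. Standalone sketches of the two commands seen in the trace:

    # Sketch: DIF generate, then DIF generate+copy (harness config fd omitted)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy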
00:12:00.355 [2024-11-18 00:54:34.542020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118820 ] 00:12:00.355 [2024-11-18 00:54:34.695906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.615 [2024-11-18 00:54:34.766718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.991 00:54:36 -- accel/accel.sh@18 -- # out=' 00:12:01.991 SPDK Configuration: 00:12:01.991 Core mask: 0x1 00:12:01.991 00:12:01.991 Accel Perf Configuration: 00:12:01.991 Workload Type: dif_generate_copy 00:12:01.991 Vector size: 4096 bytes 00:12:01.991 Transfer size: 4096 bytes 00:12:01.991 Vector count 1 00:12:01.991 Module: software 00:12:01.991 Queue depth: 32 00:12:01.991 Allocate depth: 32 00:12:01.991 # threads/core: 1 00:12:01.991 Run time: 1 seconds 00:12:01.991 Verify: No 00:12:01.991 00:12:01.991 Running for 1 seconds... 00:12:01.991 00:12:01.991 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:01.991 ------------------------------------------------------------------------------------ 00:12:01.991 0,0 109216/s 433 MiB/s 0 0 00:12:01.991 ==================================================================================== 00:12:01.991 Total 109216/s 426 MiB/s 0 0' 00:12:01.991 00:54:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:01.991 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:01.991 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:01.991 00:54:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:01.991 00:54:36 -- accel/accel.sh@12 -- # build_accel_config 00:12:01.991 00:54:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:01.991 00:54:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:01.991 00:54:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:01.991 00:54:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:01.991 00:54:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:01.991 00:54:36 -- accel/accel.sh@41 -- # local IFS=, 00:12:01.991 00:54:36 -- accel/accel.sh@42 -- # jq -r . 00:12:01.991 [2024-11-18 00:54:36.200586] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:01.992 [2024-11-18 00:54:36.200882] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118857 ] 00:12:01.992 [2024-11-18 00:54:36.356435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.250 [2024-11-18 00:54:36.445281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val= 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val= 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val=0x1 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val= 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val= 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val= 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val=software 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@23 -- # accel_module=software 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val=32 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val=32 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 
-- # val=1 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val=No 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val= 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:02.250 00:54:36 -- accel/accel.sh@21 -- # val= 00:12:02.250 00:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:02.250 00:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:03.626 00:54:37 -- accel/accel.sh@21 -- # val= 00:12:03.626 00:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.626 00:54:37 -- accel/accel.sh@20 -- # IFS=: 00:12:03.626 00:54:37 -- accel/accel.sh@20 -- # read -r var val 00:12:03.626 00:54:37 -- accel/accel.sh@21 -- # val= 00:12:03.626 00:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.626 00:54:37 -- accel/accel.sh@20 -- # IFS=: 00:12:03.626 00:54:37 -- accel/accel.sh@20 -- # read -r var val 00:12:03.626 00:54:37 -- accel/accel.sh@21 -- # val= 00:12:03.626 00:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.626 00:54:37 -- accel/accel.sh@20 -- # IFS=: 00:12:03.626 00:54:37 -- accel/accel.sh@20 -- # read -r var val 00:12:03.626 00:54:37 -- accel/accel.sh@21 -- # val= 00:12:03.626 00:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.626 00:54:37 -- accel/accel.sh@20 -- # IFS=: 00:12:03.626 00:54:37 -- accel/accel.sh@20 -- # read -r var val 00:12:03.626 00:54:37 -- accel/accel.sh@21 -- # val= 00:12:03.626 00:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.626 00:54:37 -- accel/accel.sh@20 -- # IFS=: 00:12:03.626 00:54:37 -- accel/accel.sh@20 -- # read -r var val 00:12:03.626 00:54:37 -- accel/accel.sh@21 -- # val= 00:12:03.626 00:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.626 00:54:37 -- accel/accel.sh@20 -- # IFS=: 00:12:03.626 00:54:37 -- accel/accel.sh@20 -- # read -r var val 00:12:03.626 00:54:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:03.626 00:54:37 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:12:03.626 00:54:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:03.626 00:12:03.626 real 0m3.348s 00:12:03.626 user 0m2.721s 00:12:03.626 sys 0m0.452s 00:12:03.626 00:54:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:03.626 00:54:37 -- common/autotest_common.sh@10 -- # set +x 00:12:03.626 ************************************ 00:12:03.626 END TEST accel_dif_generate_copy 00:12:03.626 ************************************ 00:12:03.626 00:54:37 -- accel/accel.sh@107 -- # [[ y == y ]] 00:12:03.626 00:54:37 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:03.626 00:54:37 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:12:03.626 00:54:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:03.626 00:54:37 -- 
common/autotest_common.sh@10 -- # set +x 00:12:03.626 ************************************ 00:12:03.626 START TEST accel_comp 00:12:03.626 ************************************ 00:12:03.626 00:54:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:03.626 00:54:37 -- accel/accel.sh@16 -- # local accel_opc 00:12:03.626 00:54:37 -- accel/accel.sh@17 -- # local accel_module 00:12:03.626 00:54:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:03.626 00:54:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:03.626 00:54:37 -- accel/accel.sh@12 -- # build_accel_config 00:12:03.626 00:54:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:03.626 00:54:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:03.626 00:54:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:03.626 00:54:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:03.626 00:54:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:03.626 00:54:37 -- accel/accel.sh@41 -- # local IFS=, 00:12:03.626 00:54:37 -- accel/accel.sh@42 -- # jq -r . 00:12:03.626 [2024-11-18 00:54:37.949952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:03.626 [2024-11-18 00:54:37.950233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118895 ] 00:12:03.885 [2024-11-18 00:54:38.107352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.885 [2024-11-18 00:54:38.191317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.261 00:54:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:05.261 00:12:05.261 SPDK Configuration: 00:12:05.261 Core mask: 0x1 00:12:05.261 00:12:05.261 Accel Perf Configuration: 00:12:05.261 Workload Type: compress 00:12:05.261 Transfer size: 4096 bytes 00:12:05.261 Vector count 1 00:12:05.261 Module: software 00:12:05.261 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:05.261 Queue depth: 32 00:12:05.261 Allocate depth: 32 00:12:05.261 # threads/core: 1 00:12:05.261 Run time: 1 seconds 00:12:05.261 Verify: No 00:12:05.261 00:12:05.261 Running for 1 seconds... 
00:12:05.261 00:12:05.261 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:05.261 ------------------------------------------------------------------------------------ 00:12:05.261 0,0 60096/s 250 MiB/s 0 0 00:12:05.261 ==================================================================================== 00:12:05.261 Total 60096/s 234 MiB/s 0 0' 00:12:05.261 00:54:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:05.261 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.261 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.261 00:54:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:05.261 00:54:39 -- accel/accel.sh@12 -- # build_accel_config 00:12:05.261 00:54:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:05.261 00:54:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:05.261 00:54:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:05.261 00:54:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:05.261 00:54:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:05.261 00:54:39 -- accel/accel.sh@41 -- # local IFS=, 00:12:05.261 00:54:39 -- accel/accel.sh@42 -- # jq -r . 00:12:05.261 [2024-11-18 00:54:39.632187] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:05.261 [2024-11-18 00:54:39.632456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118925 ] 00:12:05.520 [2024-11-18 00:54:39.775270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.520 [2024-11-18 00:54:39.869497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val= 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val= 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val= 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val=0x1 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val= 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val= 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val=compress 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@24 -- # accel_opc=compress 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 
00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val= 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val=software 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@23 -- # accel_module=software 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val=32 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val=32 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val=1 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val=No 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val= 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:05.779 00:54:39 -- accel/accel.sh@21 -- # val= 00:12:05.779 00:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:05.779 00:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 00:54:41 -- accel/accel.sh@21 -- # val= 00:12:07.156 00:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 00:54:41 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 00:54:41 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 00:54:41 -- accel/accel.sh@21 -- # val= 00:12:07.156 00:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 00:54:41 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 00:54:41 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 00:54:41 -- accel/accel.sh@21 -- # val= 00:12:07.156 00:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 00:54:41 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 00:54:41 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 00:54:41 -- accel/accel.sh@21 -- # val= 
00:12:07.156 00:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 00:54:41 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 00:54:41 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 00:54:41 -- accel/accel.sh@21 -- # val= 00:12:07.156 00:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 00:54:41 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 00:54:41 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 00:54:41 -- accel/accel.sh@21 -- # val= 00:12:07.156 00:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 00:54:41 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 00:54:41 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 00:54:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:07.156 00:54:41 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:12:07.156 00:54:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:07.156 00:12:07.156 real 0m3.380s 00:12:07.156 user 0m2.766s 00:12:07.156 sys 0m0.450s 00:12:07.156 00:54:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:07.156 00:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:07.156 ************************************ 00:12:07.156 END TEST accel_comp 00:12:07.156 ************************************ 00:12:07.156 00:54:41 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:07.156 00:54:41 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:07.156 00:54:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:07.156 00:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:07.156 ************************************ 00:12:07.156 START TEST accel_decomp 00:12:07.156 ************************************ 00:12:07.156 00:54:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:07.156 00:54:41 -- accel/accel.sh@16 -- # local accel_opc 00:12:07.156 00:54:41 -- accel/accel.sh@17 -- # local accel_module 00:12:07.156 00:54:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:07.156 00:54:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:07.156 00:54:41 -- accel/accel.sh@12 -- # build_accel_config 00:12:07.156 00:54:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:07.156 00:54:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:07.156 00:54:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:07.156 00:54:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:07.156 00:54:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:07.156 00:54:41 -- accel/accel.sh@41 -- # local IFS=, 00:12:07.156 00:54:41 -- accel/accel.sh@42 -- # jq -r . 00:12:07.156 [2024-11-18 00:54:41.391359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:07.156 [2024-11-18 00:54:41.391638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118970 ] 00:12:07.156 [2024-11-18 00:54:41.548099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.415 [2024-11-18 00:54:41.622517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.792 00:54:43 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:08.792 00:12:08.792 SPDK Configuration: 00:12:08.792 Core mask: 0x1 00:12:08.792 00:12:08.792 Accel Perf Configuration: 00:12:08.792 Workload Type: decompress 00:12:08.792 Transfer size: 4096 bytes 00:12:08.792 Vector count 1 00:12:08.792 Module: software 00:12:08.792 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:08.792 Queue depth: 32 00:12:08.792 Allocate depth: 32 00:12:08.792 # threads/core: 1 00:12:08.792 Run time: 1 seconds 00:12:08.792 Verify: Yes 00:12:08.792 00:12:08.792 Running for 1 seconds... 00:12:08.792 00:12:08.792 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:08.792 ------------------------------------------------------------------------------------ 00:12:08.792 0,0 64480/s 118 MiB/s 0 0 00:12:08.792 ==================================================================================== 00:12:08.792 Total 64480/s 251 MiB/s 0 0' 00:12:08.792 00:54:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:08.792 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:08.792 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:08.792 00:54:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:08.792 00:54:43 -- accel/accel.sh@12 -- # build_accel_config 00:12:08.792 00:54:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:08.792 00:54:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:08.792 00:54:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:08.792 00:54:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:08.792 00:54:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:08.792 00:54:43 -- accel/accel.sh@41 -- # local IFS=, 00:12:08.792 00:54:43 -- accel/accel.sh@42 -- # jq -r . 00:12:08.792 [2024-11-18 00:54:43.062770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:08.792 [2024-11-18 00:54:43.063069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118993 ] 00:12:09.050 [2024-11-18 00:54:43.215752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.050 [2024-11-18 00:54:43.304871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val= 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val= 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val= 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val=0x1 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val= 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val= 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val=decompress 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val= 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val=software 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@23 -- # accel_module=software 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val=32 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- 
accel/accel.sh@21 -- # val=32 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val=1 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val=Yes 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val= 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:09.050 00:54:43 -- accel/accel.sh@21 -- # val= 00:12:09.050 00:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:09.050 00:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:10.424 00:54:44 -- accel/accel.sh@21 -- # val= 00:12:10.424 00:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.424 00:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:10.424 00:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:10.424 00:54:44 -- accel/accel.sh@21 -- # val= 00:12:10.424 00:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.424 00:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:10.424 00:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:10.424 00:54:44 -- accel/accel.sh@21 -- # val= 00:12:10.424 00:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.424 00:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:10.424 00:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:10.424 00:54:44 -- accel/accel.sh@21 -- # val= 00:12:10.424 00:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.424 00:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:10.424 00:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:10.424 00:54:44 -- accel/accel.sh@21 -- # val= 00:12:10.424 00:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.424 00:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:10.424 00:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:10.424 00:54:44 -- accel/accel.sh@21 -- # val= 00:12:10.424 00:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.424 00:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:10.424 00:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:10.424 00:54:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:10.424 00:54:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:10.424 00:54:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:10.424 00:12:10.424 real 0m3.371s 00:12:10.424 user 0m2.801s 00:12:10.424 sys 0m0.411s 00:12:10.424 00:54:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:10.424 00:54:44 -- common/autotest_common.sh@10 -- # set +x 00:12:10.424 ************************************ 00:12:10.424 END TEST accel_decomp 00:12:10.424 ************************************ 00:12:10.424 00:54:44 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:12:10.424 00:54:44 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:10.424 00:54:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:10.424 00:54:44 -- common/autotest_common.sh@10 -- # set +x 00:12:10.424 ************************************ 00:12:10.424 START TEST accel_decmop_full 00:12:10.424 ************************************ 00:12:10.424 00:54:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:10.424 00:54:44 -- accel/accel.sh@16 -- # local accel_opc 00:12:10.424 00:54:44 -- accel/accel.sh@17 -- # local accel_module 00:12:10.424 00:54:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:10.424 00:54:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:10.424 00:54:44 -- accel/accel.sh@12 -- # build_accel_config 00:12:10.424 00:54:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:10.424 00:54:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:10.424 00:54:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:10.424 00:54:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:10.424 00:54:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:10.424 00:54:44 -- accel/accel.sh@41 -- # local IFS=, 00:12:10.424 00:54:44 -- accel/accel.sh@42 -- # jq -r . 00:12:10.424 [2024-11-18 00:54:44.823212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:10.424 [2024-11-18 00:54:44.823535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119038 ] 00:12:10.684 [2024-11-18 00:54:44.965043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.684 [2024-11-18 00:54:45.044621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.062 00:54:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:12.062 00:12:12.062 SPDK Configuration: 00:12:12.062 Core mask: 0x1 00:12:12.062 00:12:12.062 Accel Perf Configuration: 00:12:12.062 Workload Type: decompress 00:12:12.062 Transfer size: 111250 bytes 00:12:12.062 Vector count 1 00:12:12.062 Module: software 00:12:12.062 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:12.062 Queue depth: 32 00:12:12.062 Allocate depth: 32 00:12:12.062 # threads/core: 1 00:12:12.062 Run time: 1 seconds 00:12:12.062 Verify: Yes 00:12:12.062 00:12:12.062 Running for 1 seconds... 
00:12:12.062 00:12:12.062 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:12.062 ------------------------------------------------------------------------------------ 00:12:12.062 0,0 4704/s 194 MiB/s 0 0 00:12:12.062 ==================================================================================== 00:12:12.062 Total 4704/s 499 MiB/s 0 0' 00:12:12.062 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.062 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.062 00:54:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:12.062 00:54:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:12.062 00:54:46 -- accel/accel.sh@12 -- # build_accel_config 00:12:12.062 00:54:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:12.062 00:54:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:12.062 00:54:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:12.062 00:54:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:12.062 00:54:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:12.062 00:54:46 -- accel/accel.sh@41 -- # local IFS=, 00:12:12.062 00:54:46 -- accel/accel.sh@42 -- # jq -r . 00:12:12.321 [2024-11-18 00:54:46.496568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:12.321 [2024-11-18 00:54:46.497049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119071 ] 00:12:12.321 [2024-11-18 00:54:46.655167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.580 [2024-11-18 00:54:46.748612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val= 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val= 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val= 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val=0x1 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val= 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val= 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val=decompress 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:12.580 00:54:46 -- 
accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val= 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val=software 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@23 -- # accel_module=software 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val=32 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val=32 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val=1 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val=Yes 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val= 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:12.580 00:54:46 -- accel/accel.sh@21 -- # val= 00:12:12.580 00:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # IFS=: 00:12:12.580 00:54:46 -- accel/accel.sh@20 -- # read -r var val 00:12:13.973 00:54:48 -- accel/accel.sh@21 -- # val= 00:12:13.974 00:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.974 00:54:48 -- accel/accel.sh@20 -- # IFS=: 00:12:13.974 00:54:48 -- accel/accel.sh@20 -- # read -r var val 00:12:13.974 00:54:48 -- accel/accel.sh@21 -- # val= 00:12:13.974 00:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.974 00:54:48 -- accel/accel.sh@20 -- # IFS=: 00:12:13.974 00:54:48 -- accel/accel.sh@20 -- # read -r var val 00:12:13.974 00:54:48 -- accel/accel.sh@21 -- # val= 00:12:13.974 00:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.974 00:54:48 -- accel/accel.sh@20 -- # IFS=: 00:12:13.974 00:54:48 -- accel/accel.sh@20 -- # read -r var val 00:12:13.974 00:54:48 -- 
accel/accel.sh@21 -- # val= 00:12:13.974 00:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.974 00:54:48 -- accel/accel.sh@20 -- # IFS=: 00:12:13.974 00:54:48 -- accel/accel.sh@20 -- # read -r var val 00:12:13.974 00:54:48 -- accel/accel.sh@21 -- # val= 00:12:13.974 00:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.974 00:54:48 -- accel/accel.sh@20 -- # IFS=: 00:12:13.974 00:54:48 -- accel/accel.sh@20 -- # read -r var val 00:12:13.974 00:54:48 -- accel/accel.sh@21 -- # val= 00:12:13.974 00:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.974 00:54:48 -- accel/accel.sh@20 -- # IFS=: 00:12:13.974 00:54:48 -- accel/accel.sh@20 -- # read -r var val 00:12:13.974 00:54:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:13.974 00:54:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:13.974 00:54:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:13.974 00:12:13.974 real 0m3.391s 00:12:13.974 user 0m2.786s 00:12:13.974 sys 0m0.432s 00:12:13.974 00:54:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:13.974 00:54:48 -- common/autotest_common.sh@10 -- # set +x 00:12:13.974 ************************************ 00:12:13.974 END TEST accel_decmop_full 00:12:13.974 ************************************ 00:12:13.974 00:54:48 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:13.974 00:54:48 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:13.974 00:54:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:13.974 00:54:48 -- common/autotest_common.sh@10 -- # set +x 00:12:13.974 ************************************ 00:12:13.974 START TEST accel_decomp_mcore 00:12:13.974 ************************************ 00:12:13.974 00:54:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:13.974 00:54:48 -- accel/accel.sh@16 -- # local accel_opc 00:12:13.974 00:54:48 -- accel/accel.sh@17 -- # local accel_module 00:12:13.974 00:54:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:13.974 00:54:48 -- accel/accel.sh@12 -- # build_accel_config 00:12:13.974 00:54:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:13.974 00:54:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:13.974 00:54:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:13.974 00:54:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:13.974 00:54:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:13.974 00:54:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:13.974 00:54:48 -- accel/accel.sh@41 -- # local IFS=, 00:12:13.974 00:54:48 -- accel/accel.sh@42 -- # jq -r . 00:12:13.974 [2024-11-18 00:54:48.280576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:13.974 [2024-11-18 00:54:48.281027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119108 ] 00:12:14.233 [2024-11-18 00:54:48.453459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.233 [2024-11-18 00:54:48.534327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.233 [2024-11-18 00:54:48.534437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.233 [2024-11-18 00:54:48.534625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.233 [2024-11-18 00:54:48.534632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.611 00:54:49 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:15.611 00:12:15.611 SPDK Configuration: 00:12:15.611 Core mask: 0xf 00:12:15.611 00:12:15.611 Accel Perf Configuration: 00:12:15.611 Workload Type: decompress 00:12:15.611 Transfer size: 4096 bytes 00:12:15.611 Vector count 1 00:12:15.611 Module: software 00:12:15.611 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:15.611 Queue depth: 32 00:12:15.611 Allocate depth: 32 00:12:15.611 # threads/core: 1 00:12:15.611 Run time: 1 seconds 00:12:15.611 Verify: Yes 00:12:15.611 00:12:15.611 Running for 1 seconds... 00:12:15.611 00:12:15.611 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:15.611 ------------------------------------------------------------------------------------ 00:12:15.611 0,0 56576/s 104 MiB/s 0 0 00:12:15.611 3,0 56480/s 104 MiB/s 0 0 00:12:15.611 2,0 58368/s 107 MiB/s 0 0 00:12:15.611 1,0 57664/s 106 MiB/s 0 0 00:12:15.611 ==================================================================================== 00:12:15.611 Total 229088/s 894 MiB/s 0 0' 00:12:15.611 00:54:49 -- accel/accel.sh@20 -- # IFS=: 00:12:15.611 00:54:49 -- accel/accel.sh@20 -- # read -r var val 00:12:15.611 00:54:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:15.611 00:54:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:15.611 00:54:49 -- accel/accel.sh@12 -- # build_accel_config 00:12:15.611 00:54:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:15.611 00:54:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:15.611 00:54:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:15.611 00:54:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:15.611 00:54:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:15.611 00:54:49 -- accel/accel.sh@41 -- # local IFS=, 00:12:15.611 00:54:49 -- accel/accel.sh@42 -- # jq -r . 00:12:15.611 [2024-11-18 00:54:49.978540] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:15.611 [2024-11-18 00:54:49.979413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119145 ] 00:12:15.870 [2024-11-18 00:54:50.143357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.870 [2024-11-18 00:54:50.243267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.870 [2024-11-18 00:54:50.243373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.870 [2024-11-18 00:54:50.243566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.870 [2024-11-18 00:54:50.243572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.129 00:54:50 -- accel/accel.sh@21 -- # val= 00:12:16.129 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.129 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.129 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.129 00:54:50 -- accel/accel.sh@21 -- # val= 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val= 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val=0xf 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val= 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val= 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val=decompress 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val= 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val=software 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@23 -- # accel_module=software 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 
00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val=32 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val=32 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val=1 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val=Yes 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val= 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:16.130 00:54:50 -- accel/accel.sh@21 -- # val= 00:12:16.130 00:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:16.130 00:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:17.507 00:54:51 -- accel/accel.sh@21 -- # val= 00:12:17.507 00:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:17.507 00:54:51 -- accel/accel.sh@21 -- # val= 00:12:17.507 00:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:17.507 00:54:51 -- accel/accel.sh@21 -- # val= 00:12:17.507 00:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:17.507 00:54:51 -- accel/accel.sh@21 -- # val= 00:12:17.507 00:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:17.507 00:54:51 -- accel/accel.sh@21 -- # val= 00:12:17.507 00:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:17.507 00:54:51 -- accel/accel.sh@21 -- # val= 00:12:17.507 00:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:17.507 00:54:51 -- accel/accel.sh@21 -- # val= 00:12:17.507 00:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:17.507 00:54:51 -- accel/accel.sh@21 -- # val= 00:12:17.507 00:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:17.507 00:54:51 -- 
accel/accel.sh@20 -- # read -r var val 00:12:17.507 00:54:51 -- accel/accel.sh@21 -- # val= 00:12:17.507 00:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:17.507 00:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:17.507 00:54:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:17.507 00:54:51 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:17.507 00:54:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:17.507 00:12:17.507 real 0m3.437s 00:12:17.507 user 0m10.168s 00:12:17.507 sys 0m0.457s 00:12:17.508 ************************************ 00:12:17.508 END TEST accel_decomp_mcore 00:12:17.508 ************************************ 00:12:17.508 00:54:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:17.508 00:54:51 -- common/autotest_common.sh@10 -- # set +x 00:12:17.508 00:54:51 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:17.508 00:54:51 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:17.508 00:54:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:17.508 00:54:51 -- common/autotest_common.sh@10 -- # set +x 00:12:17.508 ************************************ 00:12:17.508 START TEST accel_decomp_full_mcore 00:12:17.508 ************************************ 00:12:17.508 00:54:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:17.508 00:54:51 -- accel/accel.sh@16 -- # local accel_opc 00:12:17.508 00:54:51 -- accel/accel.sh@17 -- # local accel_module 00:12:17.508 00:54:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:17.508 00:54:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:17.508 00:54:51 -- accel/accel.sh@12 -- # build_accel_config 00:12:17.508 00:54:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:17.508 00:54:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:17.508 00:54:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:17.508 00:54:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:17.508 00:54:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:17.508 00:54:51 -- accel/accel.sh@41 -- # local IFS=, 00:12:17.508 00:54:51 -- accel/accel.sh@42 -- # jq -r . 00:12:17.508 [2024-11-18 00:54:51.775530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:17.508 [2024-11-18 00:54:51.775849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119189 ] 00:12:17.765 [2024-11-18 00:54:51.937357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.765 [2024-11-18 00:54:52.010719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.765 [2024-11-18 00:54:52.010906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.765 [2024-11-18 00:54:52.011099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.765 [2024-11-18 00:54:52.011154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.141 00:54:53 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:19.141 00:12:19.141 SPDK Configuration: 00:12:19.141 Core mask: 0xf 00:12:19.141 00:12:19.141 Accel Perf Configuration: 00:12:19.141 Workload Type: decompress 00:12:19.141 Transfer size: 111250 bytes 00:12:19.141 Vector count 1 00:12:19.141 Module: software 00:12:19.141 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:19.141 Queue depth: 32 00:12:19.141 Allocate depth: 32 00:12:19.141 # threads/core: 1 00:12:19.141 Run time: 1 seconds 00:12:19.141 Verify: Yes 00:12:19.141 00:12:19.141 Running for 1 seconds... 00:12:19.141 00:12:19.141 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:19.141 ------------------------------------------------------------------------------------ 00:12:19.141 0,0 4800/s 198 MiB/s 0 0 00:12:19.141 3,0 4800/s 198 MiB/s 0 0 00:12:19.141 2,0 4800/s 198 MiB/s 0 0 00:12:19.141 1,0 4832/s 199 MiB/s 0 0 00:12:19.141 ==================================================================================== 00:12:19.141 Total 19232/s 2040 MiB/s 0 0' 00:12:19.141 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.141 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.141 00:54:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:19.141 00:54:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:19.141 00:54:53 -- accel/accel.sh@12 -- # build_accel_config 00:12:19.141 00:54:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:19.141 00:54:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:19.141 00:54:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:19.141 00:54:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:19.141 00:54:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:19.141 00:54:53 -- accel/accel.sh@41 -- # local IFS=, 00:12:19.141 00:54:53 -- accel/accel.sh@42 -- # jq -r . 00:12:19.141 [2024-11-18 00:54:53.469085] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:19.141 [2024-11-18 00:54:53.469575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119222 ] 00:12:19.400 [2024-11-18 00:54:53.631887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.400 [2024-11-18 00:54:53.727472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.400 [2024-11-18 00:54:53.727642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.400 [2024-11-18 00:54:53.727820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.400 [2024-11-18 00:54:53.727929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val= 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val= 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val= 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val=0xf 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val= 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val= 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val=decompress 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val= 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val=software 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@23 -- # accel_module=software 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 
00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val=32 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val=32 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val=1 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val=Yes 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val= 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:19.659 00:54:53 -- accel/accel.sh@21 -- # val= 00:12:19.659 00:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # IFS=: 00:12:19.659 00:54:53 -- accel/accel.sh@20 -- # read -r var val 00:12:21.036 00:54:55 -- accel/accel.sh@21 -- # val= 00:12:21.036 00:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.036 00:54:55 -- accel/accel.sh@20 -- # IFS=: 00:12:21.036 00:54:55 -- accel/accel.sh@20 -- # read -r var val 00:12:21.036 00:54:55 -- accel/accel.sh@21 -- # val= 00:12:21.036 00:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.036 00:54:55 -- accel/accel.sh@20 -- # IFS=: 00:12:21.037 00:54:55 -- accel/accel.sh@20 -- # read -r var val 00:12:21.037 00:54:55 -- accel/accel.sh@21 -- # val= 00:12:21.037 00:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.037 00:54:55 -- accel/accel.sh@20 -- # IFS=: 00:12:21.037 00:54:55 -- accel/accel.sh@20 -- # read -r var val 00:12:21.037 00:54:55 -- accel/accel.sh@21 -- # val= 00:12:21.037 00:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.037 00:54:55 -- accel/accel.sh@20 -- # IFS=: 00:12:21.037 00:54:55 -- accel/accel.sh@20 -- # read -r var val 00:12:21.037 00:54:55 -- accel/accel.sh@21 -- # val= 00:12:21.037 00:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.037 00:54:55 -- accel/accel.sh@20 -- # IFS=: 00:12:21.037 00:54:55 -- accel/accel.sh@20 -- # read -r var val 00:12:21.037 00:54:55 -- accel/accel.sh@21 -- # val= 00:12:21.037 00:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.037 00:54:55 -- accel/accel.sh@20 -- # IFS=: 00:12:21.037 00:54:55 -- accel/accel.sh@20 -- # read -r var val 00:12:21.037 00:54:55 -- accel/accel.sh@21 -- # val= 00:12:21.037 00:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.037 00:54:55 -- accel/accel.sh@20 -- # IFS=: 00:12:21.037 00:54:55 -- accel/accel.sh@20 -- # read -r var val 00:12:21.037 00:54:55 -- accel/accel.sh@21 -- # val= 00:12:21.037 00:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.037 00:54:55 -- accel/accel.sh@20 -- # IFS=: 00:12:21.037 00:54:55 -- 
accel/accel.sh@20 -- # read -r var val 00:12:21.037 00:54:55 -- accel/accel.sh@21 -- # val= 00:12:21.037 00:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.037 00:54:55 -- accel/accel.sh@20 -- # IFS=: 00:12:21.037 00:54:55 -- accel/accel.sh@20 -- # read -r var val 00:12:21.037 ************************************ 00:12:21.037 END TEST accel_decomp_full_mcore 00:12:21.037 ************************************ 00:12:21.037 00:54:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:21.037 00:54:55 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:21.037 00:54:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:21.037 00:12:21.037 real 0m3.427s 00:12:21.037 user 0m10.182s 00:12:21.037 sys 0m0.488s 00:12:21.037 00:54:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:21.037 00:54:55 -- common/autotest_common.sh@10 -- # set +x 00:12:21.037 00:54:55 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:21.037 00:54:55 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:21.037 00:54:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:21.037 00:54:55 -- common/autotest_common.sh@10 -- # set +x 00:12:21.037 ************************************ 00:12:21.037 START TEST accel_decomp_mthread 00:12:21.037 ************************************ 00:12:21.037 00:54:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:21.037 00:54:55 -- accel/accel.sh@16 -- # local accel_opc 00:12:21.037 00:54:55 -- accel/accel.sh@17 -- # local accel_module 00:12:21.037 00:54:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:21.037 00:54:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:21.037 00:54:55 -- accel/accel.sh@12 -- # build_accel_config 00:12:21.037 00:54:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:21.037 00:54:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:21.037 00:54:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:21.037 00:54:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:21.037 00:54:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:21.037 00:54:55 -- accel/accel.sh@41 -- # local IFS=, 00:12:21.037 00:54:55 -- accel/accel.sh@42 -- # jq -r . 00:12:21.037 [2024-11-18 00:54:55.279781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:21.037 [2024-11-18 00:54:55.280200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119263 ] 00:12:21.296 [2024-11-18 00:54:55.436396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.296 [2024-11-18 00:54:55.517767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.674 00:54:56 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:22.674 00:12:22.674 SPDK Configuration: 00:12:22.674 Core mask: 0x1 00:12:22.674 00:12:22.674 Accel Perf Configuration: 00:12:22.674 Workload Type: decompress 00:12:22.674 Transfer size: 4096 bytes 00:12:22.674 Vector count 1 00:12:22.674 Module: software 00:12:22.674 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:22.674 Queue depth: 32 00:12:22.674 Allocate depth: 32 00:12:22.674 # threads/core: 2 00:12:22.674 Run time: 1 seconds 00:12:22.674 Verify: Yes 00:12:22.674 00:12:22.674 Running for 1 seconds... 00:12:22.674 00:12:22.674 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:22.674 ------------------------------------------------------------------------------------ 00:12:22.674 0,1 34144/s 62 MiB/s 0 0 00:12:22.674 0,0 34016/s 62 MiB/s 0 0 00:12:22.674 ==================================================================================== 00:12:22.674 Total 68160/s 266 MiB/s 0 0' 00:12:22.674 00:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:22.674 00:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:22.674 00:54:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:22.674 00:54:56 -- accel/accel.sh@12 -- # build_accel_config 00:12:22.674 00:54:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:22.674 00:54:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:22.674 00:54:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:22.674 00:54:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:22.674 00:54:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:22.674 00:54:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:22.674 00:54:56 -- accel/accel.sh@41 -- # local IFS=, 00:12:22.674 00:54:56 -- accel/accel.sh@42 -- # jq -r . 00:12:22.674 [2024-11-18 00:54:56.964413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
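As a quick consistency check on the table above, aggregate bandwidth is just transfers per second times the 4096-byte transfer size; the shell arithmetic below reproduces the reported totals from the numbers in the table.

  # 68160 transfers/s at 4096 bytes each, converted to MiB/s
  echo $((68160 * 4096 / 1024 / 1024))   # 266, matching "266 MiB/s"
  # the two per-core-thread rows (0,1 and 0,0) sum to the total transfer rate
  echo $((34144 + 34016))                # 68160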
00:12:22.674 [2024-11-18 00:54:56.964877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119300 ] 00:12:22.934 [2024-11-18 00:54:57.120225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.934 [2024-11-18 00:54:57.207911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val= 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val= 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val= 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val=0x1 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val= 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val= 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val=decompress 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val= 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val=software 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@23 -- # accel_module=software 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val=32 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- 
accel/accel.sh@21 -- # val=32 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val=2 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val=Yes 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val= 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:22.934 00:54:57 -- accel/accel.sh@21 -- # val= 00:12:22.934 00:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:22.934 00:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:24.313 00:54:58 -- accel/accel.sh@21 -- # val= 00:12:24.313 00:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.313 00:54:58 -- accel/accel.sh@20 -- # IFS=: 00:12:24.313 00:54:58 -- accel/accel.sh@20 -- # read -r var val 00:12:24.313 00:54:58 -- accel/accel.sh@21 -- # val= 00:12:24.313 00:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.313 00:54:58 -- accel/accel.sh@20 -- # IFS=: 00:12:24.313 00:54:58 -- accel/accel.sh@20 -- # read -r var val 00:12:24.313 00:54:58 -- accel/accel.sh@21 -- # val= 00:12:24.313 00:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.313 00:54:58 -- accel/accel.sh@20 -- # IFS=: 00:12:24.313 00:54:58 -- accel/accel.sh@20 -- # read -r var val 00:12:24.313 00:54:58 -- accel/accel.sh@21 -- # val= 00:12:24.313 00:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.313 00:54:58 -- accel/accel.sh@20 -- # IFS=: 00:12:24.313 00:54:58 -- accel/accel.sh@20 -- # read -r var val 00:12:24.313 00:54:58 -- accel/accel.sh@21 -- # val= 00:12:24.313 00:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.313 00:54:58 -- accel/accel.sh@20 -- # IFS=: 00:12:24.313 00:54:58 -- accel/accel.sh@20 -- # read -r var val 00:12:24.313 00:54:58 -- accel/accel.sh@21 -- # val= 00:12:24.313 00:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.313 00:54:58 -- accel/accel.sh@20 -- # IFS=: 00:12:24.313 00:54:58 -- accel/accel.sh@20 -- # read -r var val 00:12:24.313 00:54:58 -- accel/accel.sh@21 -- # val= 00:12:24.313 00:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.313 00:54:58 -- accel/accel.sh@20 -- # IFS=: 00:12:24.313 00:54:58 -- accel/accel.sh@20 -- # read -r var val 00:12:24.313 ************************************ 00:12:24.313 END TEST accel_decomp_mthread 00:12:24.313 ************************************ 00:12:24.313 00:54:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:24.313 00:54:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:24.313 00:54:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:24.313 00:12:24.313 real 0m3.383s 00:12:24.313 user 0m2.775s 00:12:24.313 sys 0m0.423s 00:12:24.313 00:54:58 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:12:24.313 00:54:58 -- common/autotest_common.sh@10 -- # set +x 00:12:24.313 00:54:58 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:24.313 00:54:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:24.313 00:54:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:24.313 00:54:58 -- common/autotest_common.sh@10 -- # set +x 00:12:24.313 ************************************ 00:12:24.313 START TEST accel_deomp_full_mthread 00:12:24.313 ************************************ 00:12:24.313 00:54:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:24.313 00:54:58 -- accel/accel.sh@16 -- # local accel_opc 00:12:24.313 00:54:58 -- accel/accel.sh@17 -- # local accel_module 00:12:24.313 00:54:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:24.313 00:54:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:24.313 00:54:58 -- accel/accel.sh@12 -- # build_accel_config 00:12:24.313 00:54:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:24.313 00:54:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:24.313 00:54:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:24.313 00:54:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:24.313 00:54:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:24.313 00:54:58 -- accel/accel.sh@41 -- # local IFS=, 00:12:24.313 00:54:58 -- accel/accel.sh@42 -- # jq -r . 00:12:24.573 [2024-11-18 00:54:58.726377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:24.573 [2024-11-18 00:54:58.726805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119340 ] 00:12:24.573 [2024-11-18 00:54:58.880031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.573 [2024-11-18 00:54:58.946625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.478 00:55:00 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:26.478 00:12:26.478 SPDK Configuration: 00:12:26.478 Core mask: 0x1 00:12:26.478 00:12:26.478 Accel Perf Configuration: 00:12:26.478 Workload Type: decompress 00:12:26.478 Transfer size: 111250 bytes 00:12:26.478 Vector count 1 00:12:26.478 Module: software 00:12:26.478 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:26.478 Queue depth: 32 00:12:26.478 Allocate depth: 32 00:12:26.478 # threads/core: 2 00:12:26.478 Run time: 1 seconds 00:12:26.478 Verify: Yes 00:12:26.478 00:12:26.478 Running for 1 seconds... 
00:12:26.478 00:12:26.478 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:26.478 ------------------------------------------------------------------------------------ 00:12:26.478 0,1 2528/s 104 MiB/s 0 0 00:12:26.478 0,0 2496/s 103 MiB/s 0 0 00:12:26.478 ==================================================================================== 00:12:26.478 Total 5024/s 533 MiB/s 0 0' 00:12:26.478 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.478 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.478 00:55:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:26.478 00:55:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:26.478 00:55:00 -- accel/accel.sh@12 -- # build_accel_config 00:12:26.478 00:55:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:26.478 00:55:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:26.478 00:55:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:26.478 00:55:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:26.478 00:55:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:26.478 00:55:00 -- accel/accel.sh@41 -- # local IFS=, 00:12:26.478 00:55:00 -- accel/accel.sh@42 -- # jq -r . 00:12:26.478 [2024-11-18 00:55:00.411706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:26.478 [2024-11-18 00:55:00.412136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119368 ] 00:12:26.478 [2024-11-18 00:55:00.567881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.478 [2024-11-18 00:55:00.651051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.478 00:55:00 -- accel/accel.sh@21 -- # val= 00:12:26.478 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.478 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.478 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.478 00:55:00 -- accel/accel.sh@21 -- # val= 00:12:26.478 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.478 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.478 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.478 00:55:00 -- accel/accel.sh@21 -- # val= 00:12:26.478 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.478 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.478 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.478 00:55:00 -- accel/accel.sh@21 -- # val=0x1 00:12:26.478 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.478 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.478 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.478 00:55:00 -- accel/accel.sh@21 -- # val= 00:12:26.478 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.478 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.478 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.478 00:55:00 -- accel/accel.sh@21 -- # val= 00:12:26.478 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.479 00:55:00 -- accel/accel.sh@21 -- # val=decompress 00:12:26.479 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.479 00:55:00 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.479 00:55:00 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:26.479 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.479 00:55:00 -- accel/accel.sh@21 -- # val= 00:12:26.479 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.479 00:55:00 -- accel/accel.sh@21 -- # val=software 00:12:26.479 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.479 00:55:00 -- accel/accel.sh@23 -- # accel_module=software 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.479 00:55:00 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:26.479 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.479 00:55:00 -- accel/accel.sh@21 -- # val=32 00:12:26.479 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.479 00:55:00 -- accel/accel.sh@21 -- # val=32 00:12:26.479 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.479 00:55:00 -- accel/accel.sh@21 -- # val=2 00:12:26.479 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.479 00:55:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:26.479 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.479 00:55:00 -- accel/accel.sh@21 -- # val=Yes 00:12:26.479 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.479 00:55:00 -- accel/accel.sh@21 -- # val= 00:12:26.479 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:26.479 00:55:00 -- accel/accel.sh@21 -- # val= 00:12:26.479 00:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:26.479 00:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:27.858 00:55:02 -- accel/accel.sh@21 -- # val= 00:12:27.858 00:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:12:27.858 00:55:02 -- accel/accel.sh@20 -- # IFS=: 00:12:27.858 00:55:02 -- accel/accel.sh@20 -- # read -r var val 00:12:27.858 00:55:02 -- accel/accel.sh@21 -- # val= 00:12:27.858 00:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:12:27.858 00:55:02 -- accel/accel.sh@20 -- # IFS=: 00:12:27.858 00:55:02 -- accel/accel.sh@20 -- # read -r var val 00:12:27.858 00:55:02 -- accel/accel.sh@21 -- # val= 00:12:27.858 00:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:12:27.858 00:55:02 -- accel/accel.sh@20 -- # IFS=: 00:12:27.858 00:55:02 -- accel/accel.sh@20 -- # 
read -r var val 00:12:27.858 00:55:02 -- accel/accel.sh@21 -- # val= 00:12:27.858 00:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:12:27.858 00:55:02 -- accel/accel.sh@20 -- # IFS=: 00:12:27.858 00:55:02 -- accel/accel.sh@20 -- # read -r var val 00:12:27.858 00:55:02 -- accel/accel.sh@21 -- # val= 00:12:27.858 00:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:12:27.858 00:55:02 -- accel/accel.sh@20 -- # IFS=: 00:12:27.858 00:55:02 -- accel/accel.sh@20 -- # read -r var val 00:12:27.858 00:55:02 -- accel/accel.sh@21 -- # val= 00:12:27.858 00:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:12:27.858 00:55:02 -- accel/accel.sh@20 -- # IFS=: 00:12:27.858 00:55:02 -- accel/accel.sh@20 -- # read -r var val 00:12:27.858 00:55:02 -- accel/accel.sh@21 -- # val= 00:12:27.858 00:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:12:27.858 00:55:02 -- accel/accel.sh@20 -- # IFS=: 00:12:27.858 00:55:02 -- accel/accel.sh@20 -- # read -r var val 00:12:27.858 ************************************ 00:12:27.858 END TEST accel_deomp_full_mthread 00:12:27.858 ************************************ 00:12:27.858 00:55:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:27.858 00:55:02 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:27.858 00:55:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:27.858 00:12:27.858 real 0m3.395s 00:12:27.858 user 0m2.787s 00:12:27.858 sys 0m0.426s 00:12:27.858 00:55:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:27.858 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:12:27.858 00:55:02 -- accel/accel.sh@116 -- # [[ n == y ]] 00:12:27.858 00:55:02 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:27.858 00:55:02 -- accel/accel.sh@129 -- # build_accel_config 00:12:27.858 00:55:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:27.858 00:55:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:27.858 00:55:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:27.858 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:12:27.858 00:55:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:27.858 00:55:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:27.858 00:55:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:27.858 00:55:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:27.858 00:55:02 -- accel/accel.sh@41 -- # local IFS=, 00:12:27.858 00:55:02 -- accel/accel.sh@42 -- # jq -r . 00:12:27.858 ************************************ 00:12:27.858 START TEST accel_dif_functional_tests 00:12:27.858 ************************************ 00:12:27.858 00:55:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:27.858 [2024-11-18 00:55:02.239528] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:27.858 [2024-11-18 00:55:02.240023] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119415 ] 00:12:28.118 [2024-11-18 00:55:02.403506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:28.118 [2024-11-18 00:55:02.472646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.118 [2024-11-18 00:55:02.472810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.118 [2024-11-18 00:55:02.472810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.377 00:12:28.377 00:12:28.377 CUnit - A unit testing framework for C - Version 2.1-3 00:12:28.377 http://cunit.sourceforge.net/ 00:12:28.377 00:12:28.377 00:12:28.377 Suite: accel_dif 00:12:28.377 Test: verify: DIF generated, GUARD check ...passed 00:12:28.377 Test: verify: DIF generated, APPTAG check ...passed 00:12:28.377 Test: verify: DIF generated, REFTAG check ...passed 00:12:28.377 Test: verify: DIF not generated, GUARD check ...[2024-11-18 00:55:02.596312] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:28.377 [2024-11-18 00:55:02.596719] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:28.377 passed 00:12:28.377 Test: verify: DIF not generated, APPTAG check ...[2024-11-18 00:55:02.596981] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:28.377 [2024-11-18 00:55:02.597499] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:28.377 passed 00:12:28.377 Test: verify: DIF not generated, REFTAG check ...[2024-11-18 00:55:02.597731] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:28.377 [2024-11-18 00:55:02.598101] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:28.377 passed 00:12:28.377 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:28.377 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-18 00:55:02.598552] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:28.377 passed 00:12:28.377 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:12:28.377 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:28.377 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:28.377 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-18 00:55:02.599395] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:28.377 passed 00:12:28.378 Test: generate copy: DIF generated, GUARD check ...passed 00:12:28.378 Test: generate copy: DIF generated, APTTAG check ...passed 00:12:28.378 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:28.378 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:12:28.378 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:12:28.378 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:12:28.378 Test: generate copy: iovecs-len validate ...[2024-11-18 00:55:02.600834] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:12:28.378 passed 00:12:28.378 Test: generate copy: buffer alignment validate ...passed 00:12:28.378 00:12:28.378 Run Summary: Type Total Ran Passed Failed Inactive 00:12:28.378 suites 1 1 n/a 0 0 00:12:28.378 tests 20 20 20 0 0 00:12:28.378 asserts 204 204 204 0 n/a 00:12:28.378 00:12:28.378 Elapsed time = 0.011 seconds 00:12:28.637 ************************************ 00:12:28.637 END TEST accel_dif_functional_tests 00:12:28.637 ************************************ 00:12:28.637 00:12:28.637 real 0m0.835s 00:12:28.637 user 0m1.108s 00:12:28.637 sys 0m0.290s 00:12:28.637 00:55:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:28.637 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:12:28.896 ************************************ 00:12:28.896 END TEST accel 00:12:28.896 ************************************ 00:12:28.896 00:12:28.896 real 1m13.688s 00:12:28.896 user 1m14.841s 00:12:28.896 sys 0m11.197s 00:12:28.896 00:55:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:28.896 00:55:03 -- common/autotest_common.sh@10 -- # set +x 00:12:28.896 00:55:03 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:28.896 00:55:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:28.896 00:55:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:28.896 00:55:03 -- common/autotest_common.sh@10 -- # set +x 00:12:28.896 ************************************ 00:12:28.896 START TEST accel_rpc 00:12:28.896 ************************************ 00:12:28.896 00:55:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:28.896 * Looking for test storage... 00:12:28.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:28.896 00:55:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:28.896 00:55:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:28.896 00:55:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:28.896 00:55:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:28.896 00:55:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:28.896 00:55:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:28.896 00:55:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:28.896 00:55:03 -- scripts/common.sh@335 -- # IFS=.-: 00:12:28.896 00:55:03 -- scripts/common.sh@335 -- # read -ra ver1 00:12:28.896 00:55:03 -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.896 00:55:03 -- scripts/common.sh@336 -- # read -ra ver2 00:12:28.896 00:55:03 -- scripts/common.sh@337 -- # local 'op=<' 00:12:28.896 00:55:03 -- scripts/common.sh@339 -- # ver1_l=2 00:12:28.896 00:55:03 -- scripts/common.sh@340 -- # ver2_l=1 00:12:28.896 00:55:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:28.896 00:55:03 -- scripts/common.sh@343 -- # case "$op" in 00:12:28.896 00:55:03 -- scripts/common.sh@344 -- # : 1 00:12:28.896 00:55:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:28.896 00:55:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:28.896 00:55:03 -- scripts/common.sh@364 -- # decimal 1 00:12:28.896 00:55:03 -- scripts/common.sh@352 -- # local d=1 00:12:28.896 00:55:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.896 00:55:03 -- scripts/common.sh@354 -- # echo 1 00:12:28.896 00:55:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:28.896 00:55:03 -- scripts/common.sh@365 -- # decimal 2 00:12:28.896 00:55:03 -- scripts/common.sh@352 -- # local d=2 00:12:28.896 00:55:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.896 00:55:03 -- scripts/common.sh@354 -- # echo 2 00:12:28.896 00:55:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:28.896 00:55:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:28.896 00:55:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:28.896 00:55:03 -- scripts/common.sh@367 -- # return 0 00:12:28.897 00:55:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.897 00:55:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:28.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.897 --rc genhtml_branch_coverage=1 00:12:28.897 --rc genhtml_function_coverage=1 00:12:28.897 --rc genhtml_legend=1 00:12:28.897 --rc geninfo_all_blocks=1 00:12:28.897 --rc geninfo_unexecuted_blocks=1 00:12:28.897 00:12:28.897 ' 00:12:28.897 00:55:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:28.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.897 --rc genhtml_branch_coverage=1 00:12:28.897 --rc genhtml_function_coverage=1 00:12:28.897 --rc genhtml_legend=1 00:12:28.897 --rc geninfo_all_blocks=1 00:12:28.897 --rc geninfo_unexecuted_blocks=1 00:12:28.897 00:12:28.897 ' 00:12:28.897 00:55:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:28.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.897 --rc genhtml_branch_coverage=1 00:12:28.897 --rc genhtml_function_coverage=1 00:12:28.897 --rc genhtml_legend=1 00:12:28.897 --rc geninfo_all_blocks=1 00:12:28.897 --rc geninfo_unexecuted_blocks=1 00:12:28.897 00:12:28.897 ' 00:12:28.897 00:55:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:28.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.897 --rc genhtml_branch_coverage=1 00:12:28.897 --rc genhtml_function_coverage=1 00:12:28.897 --rc genhtml_legend=1 00:12:28.897 --rc geninfo_all_blocks=1 00:12:28.897 --rc geninfo_unexecuted_blocks=1 00:12:28.897 00:12:28.897 ' 00:12:28.897 00:55:03 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:28.897 00:55:03 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=119501 00:12:28.897 00:55:03 -- accel/accel_rpc.sh@15 -- # waitforlisten 119501 00:12:28.897 00:55:03 -- common/autotest_common.sh@829 -- # '[' -z 119501 ']' 00:12:28.897 00:55:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.897 00:55:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:28.897 00:55:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:28.897 00:55:03 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:28.897 00:55:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:28.897 00:55:03 -- common/autotest_common.sh@10 -- # set +x 00:12:29.157 [2024-11-18 00:55:03.365935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:29.157 [2024-11-18 00:55:03.366238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119501 ] 00:12:29.157 [2024-11-18 00:55:03.519340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.415 [2024-11-18 00:55:03.593531] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:29.416 [2024-11-18 00:55:03.593786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.983 00:55:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:29.983 00:55:04 -- common/autotest_common.sh@862 -- # return 0 00:12:29.983 00:55:04 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:29.983 00:55:04 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:29.983 00:55:04 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:29.983 00:55:04 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:29.983 00:55:04 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:29.983 00:55:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:29.983 00:55:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:29.983 00:55:04 -- common/autotest_common.sh@10 -- # set +x 00:12:29.983 ************************************ 00:12:29.983 START TEST accel_assign_opcode 00:12:29.983 ************************************ 00:12:29.983 00:55:04 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:12:29.983 00:55:04 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:29.983 00:55:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.983 00:55:04 -- common/autotest_common.sh@10 -- # set +x 00:12:29.983 [2024-11-18 00:55:04.323414] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:29.983 00:55:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.983 00:55:04 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:29.983 00:55:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.983 00:55:04 -- common/autotest_common.sh@10 -- # set +x 00:12:29.983 [2024-11-18 00:55:04.331381] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:29.983 00:55:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.983 00:55:04 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:29.983 00:55:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.983 00:55:04 -- common/autotest_common.sh@10 -- # set +x 00:12:30.550 00:55:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.550 00:55:04 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:30.550 00:55:04 -- accel/accel_rpc.sh@42 -- # grep software 00:12:30.550 00:55:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.550 00:55:04 -- common/autotest_common.sh@10 -- # set +x 00:12:30.550 00:55:04 -- accel/accel_rpc.sh@42 -- # jq -r .copy 
00:12:30.550 00:55:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.550 software 00:12:30.550 00:12:30.551 real 0m0.374s 00:12:30.551 user 0m0.052s 00:12:30.551 sys 0m0.009s 00:12:30.551 00:55:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:30.551 00:55:04 -- common/autotest_common.sh@10 -- # set +x 00:12:30.551 ************************************ 00:12:30.551 END TEST accel_assign_opcode 00:12:30.551 ************************************ 00:12:30.551 00:55:04 -- accel/accel_rpc.sh@55 -- # killprocess 119501 00:12:30.551 00:55:04 -- common/autotest_common.sh@936 -- # '[' -z 119501 ']' 00:12:30.551 00:55:04 -- common/autotest_common.sh@940 -- # kill -0 119501 00:12:30.551 00:55:04 -- common/autotest_common.sh@941 -- # uname 00:12:30.551 00:55:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.551 00:55:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119501 00:12:30.551 00:55:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:30.551 00:55:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:30.551 00:55:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119501' 00:12:30.551 killing process with pid 119501 00:12:30.551 00:55:04 -- common/autotest_common.sh@955 -- # kill 119501 00:12:30.551 00:55:04 -- common/autotest_common.sh@960 -- # wait 119501 00:12:31.120 00:12:31.120 real 0m2.339s 00:12:31.120 user 0m2.201s 00:12:31.120 sys 0m0.719s 00:12:31.120 00:55:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:31.120 00:55:05 -- common/autotest_common.sh@10 -- # set +x 00:12:31.120 ************************************ 00:12:31.120 END TEST accel_rpc 00:12:31.120 ************************************ 00:12:31.120 00:55:05 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:31.120 00:55:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:31.120 00:55:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.120 00:55:05 -- common/autotest_common.sh@10 -- # set +x 00:12:31.120 ************************************ 00:12:31.120 START TEST app_cmdline 00:12:31.120 ************************************ 00:12:31.120 00:55:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:31.379 * Looking for test storage... 
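The accel_assign_opcode test traced above drives the target over rpc.py before and after framework_start_init. A condensed sketch of the same RPC sequence is shown below; it assumes a spdk_tgt already started with --wait-for-rpc as in the log, and the RPC variable is only a local shorthand, not part of the test.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Before framework_start_init the assignment is merely recorded, so even the
  # deliberately bogus module name "incorrect" is accepted (see the NOTICE lines above).
  $RPC accel_assign_opc -o copy -m incorrect
  $RPC accel_assign_opc -o copy -m software
  $RPC framework_start_init
  # Confirm which module the copy opcode landed on; the run above prints "software".
  $RPC accel_get_opc_assignments | jq -r .copy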
00:12:31.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:31.379 00:55:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:31.379 00:55:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:31.379 00:55:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:31.379 00:55:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:31.379 00:55:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:31.379 00:55:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:31.379 00:55:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:31.379 00:55:05 -- scripts/common.sh@335 -- # IFS=.-: 00:12:31.379 00:55:05 -- scripts/common.sh@335 -- # read -ra ver1 00:12:31.379 00:55:05 -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.379 00:55:05 -- scripts/common.sh@336 -- # read -ra ver2 00:12:31.379 00:55:05 -- scripts/common.sh@337 -- # local 'op=<' 00:12:31.379 00:55:05 -- scripts/common.sh@339 -- # ver1_l=2 00:12:31.379 00:55:05 -- scripts/common.sh@340 -- # ver2_l=1 00:12:31.379 00:55:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:31.379 00:55:05 -- scripts/common.sh@343 -- # case "$op" in 00:12:31.379 00:55:05 -- scripts/common.sh@344 -- # : 1 00:12:31.379 00:55:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:31.379 00:55:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:31.379 00:55:05 -- scripts/common.sh@364 -- # decimal 1 00:12:31.379 00:55:05 -- scripts/common.sh@352 -- # local d=1 00:12:31.379 00:55:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.379 00:55:05 -- scripts/common.sh@354 -- # echo 1 00:12:31.379 00:55:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:31.379 00:55:05 -- scripts/common.sh@365 -- # decimal 2 00:12:31.379 00:55:05 -- scripts/common.sh@352 -- # local d=2 00:12:31.379 00:55:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.379 00:55:05 -- scripts/common.sh@354 -- # echo 2 00:12:31.379 00:55:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:31.379 00:55:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:31.379 00:55:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:31.379 00:55:05 -- scripts/common.sh@367 -- # return 0 00:12:31.379 00:55:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.379 00:55:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:31.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.379 --rc genhtml_branch_coverage=1 00:12:31.379 --rc genhtml_function_coverage=1 00:12:31.379 --rc genhtml_legend=1 00:12:31.379 --rc geninfo_all_blocks=1 00:12:31.379 --rc geninfo_unexecuted_blocks=1 00:12:31.379 00:12:31.379 ' 00:12:31.379 00:55:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:31.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.379 --rc genhtml_branch_coverage=1 00:12:31.379 --rc genhtml_function_coverage=1 00:12:31.379 --rc genhtml_legend=1 00:12:31.379 --rc geninfo_all_blocks=1 00:12:31.379 --rc geninfo_unexecuted_blocks=1 00:12:31.379 00:12:31.379 ' 00:12:31.379 00:55:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:31.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.379 --rc genhtml_branch_coverage=1 00:12:31.379 --rc genhtml_function_coverage=1 00:12:31.379 --rc genhtml_legend=1 00:12:31.379 --rc geninfo_all_blocks=1 00:12:31.379 --rc geninfo_unexecuted_blocks=1 00:12:31.379 00:12:31.379 ' 00:12:31.379 00:55:05 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:31.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.379 --rc genhtml_branch_coverage=1 00:12:31.379 --rc genhtml_function_coverage=1 00:12:31.379 --rc genhtml_legend=1 00:12:31.379 --rc geninfo_all_blocks=1 00:12:31.379 --rc geninfo_unexecuted_blocks=1 00:12:31.379 00:12:31.379 ' 00:12:31.379 00:55:05 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:31.379 00:55:05 -- app/cmdline.sh@17 -- # spdk_tgt_pid=119615 00:12:31.379 00:55:05 -- app/cmdline.sh@18 -- # waitforlisten 119615 00:12:31.379 00:55:05 -- common/autotest_common.sh@829 -- # '[' -z 119615 ']' 00:12:31.379 00:55:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.379 00:55:05 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:31.379 00:55:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.379 00:55:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.379 00:55:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.379 00:55:05 -- common/autotest_common.sh@10 -- # set +x 00:12:31.638 [2024-11-18 00:55:05.794538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:31.638 [2024-11-18 00:55:05.795559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119615 ] 00:12:31.638 [2024-11-18 00:55:05.951544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.638 [2024-11-18 00:55:06.021169] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:31.638 [2024-11-18 00:55:06.021664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.578 00:55:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.579 00:55:06 -- common/autotest_common.sh@862 -- # return 0 00:12:32.579 00:55:06 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:32.579 { 00:12:32.579 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:12:32.579 "fields": { 00:12:32.579 "major": 24, 00:12:32.579 "minor": 1, 00:12:32.579 "patch": 1, 00:12:32.579 "suffix": "-pre", 00:12:32.579 "commit": "c13c99a5e" 00:12:32.579 } 00:12:32.579 } 00:12:32.838 00:55:06 -- app/cmdline.sh@22 -- # expected_methods=() 00:12:32.838 00:55:06 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:32.838 00:55:06 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:32.838 00:55:06 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:32.838 00:55:06 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:32.838 00:55:06 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:32.838 00:55:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.838 00:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:32.838 00:55:06 -- app/cmdline.sh@26 -- # sort 00:12:32.838 00:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.838 00:55:07 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:32.838 00:55:07 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:32.838 00:55:07 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:32.838 00:55:07 -- common/autotest_common.sh@650 -- # local es=0 00:12:32.838 00:55:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:32.838 00:55:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:32.838 00:55:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.838 00:55:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:32.838 00:55:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.838 00:55:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:32.838 00:55:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.838 00:55:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:32.838 00:55:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:32.838 00:55:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:33.098 request: 00:12:33.098 { 00:12:33.098 "method": "env_dpdk_get_mem_stats", 00:12:33.098 "req_id": 1 00:12:33.098 } 00:12:33.098 Got JSON-RPC error response 00:12:33.098 response: 00:12:33.098 { 00:12:33.098 "code": -32601, 00:12:33.098 "message": "Method not found" 00:12:33.098 } 00:12:33.098 00:55:07 -- common/autotest_common.sh@653 -- # es=1 00:12:33.098 00:55:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:33.098 00:55:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:33.098 00:55:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:33.098 00:55:07 -- app/cmdline.sh@1 -- # killprocess 119615 00:12:33.098 00:55:07 -- common/autotest_common.sh@936 -- # '[' -z 119615 ']' 00:12:33.098 00:55:07 -- common/autotest_common.sh@940 -- # kill -0 119615 00:12:33.098 00:55:07 -- common/autotest_common.sh@941 -- # uname 00:12:33.098 00:55:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:33.098 00:55:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119615 00:12:33.098 00:55:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:33.098 00:55:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:33.098 00:55:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119615' 00:12:33.098 killing process with pid 119615 00:12:33.098 00:55:07 -- common/autotest_common.sh@955 -- # kill 119615 00:12:33.098 00:55:07 -- common/autotest_common.sh@960 -- # wait 119615 00:12:33.667 ************************************ 00:12:33.667 END TEST app_cmdline 00:12:33.667 ************************************ 00:12:33.667 00:12:33.667 real 0m2.487s 00:12:33.667 user 0m2.757s 00:12:33.667 sys 0m0.736s 00:12:33.667 00:55:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:33.667 00:55:07 -- common/autotest_common.sh@10 -- # set +x 00:12:33.667 00:55:08 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:33.667 00:55:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:33.667 00:55:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.667 00:55:08 -- common/autotest_common.sh@10 -- # set +x 
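The app_cmdline run above starts the target with an RPC allowlist, so only the two listed methods are reachable; anything else fails with the -32601 "Method not found" response shown in the log. A minimal sketch of that behaviour, with paths taken from the trace and a plain sleep standing in for the harness's waitforlisten helper:

  BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $BIN --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 1                        # crude stand-in for waitforlisten
  $RPC spdk_get_version          # allowed: returns the version object shown above
  $RPC env_dpdk_get_mem_stats \
    || echo "rejected as expected (method not on the allowlist)"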
00:12:33.667 ************************************ 00:12:33.667 START TEST version 00:12:33.667 ************************************ 00:12:33.667 00:55:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:33.926 * Looking for test storage... 00:12:33.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:33.926 00:55:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:33.926 00:55:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:33.926 00:55:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:33.926 00:55:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:33.926 00:55:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:33.926 00:55:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:33.926 00:55:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:33.926 00:55:08 -- scripts/common.sh@335 -- # IFS=.-: 00:12:33.926 00:55:08 -- scripts/common.sh@335 -- # read -ra ver1 00:12:33.926 00:55:08 -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.926 00:55:08 -- scripts/common.sh@336 -- # read -ra ver2 00:12:33.926 00:55:08 -- scripts/common.sh@337 -- # local 'op=<' 00:12:33.926 00:55:08 -- scripts/common.sh@339 -- # ver1_l=2 00:12:33.926 00:55:08 -- scripts/common.sh@340 -- # ver2_l=1 00:12:33.926 00:55:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:33.926 00:55:08 -- scripts/common.sh@343 -- # case "$op" in 00:12:33.926 00:55:08 -- scripts/common.sh@344 -- # : 1 00:12:33.926 00:55:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:33.926 00:55:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:33.926 00:55:08 -- scripts/common.sh@364 -- # decimal 1 00:12:33.926 00:55:08 -- scripts/common.sh@352 -- # local d=1 00:12:33.926 00:55:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.926 00:55:08 -- scripts/common.sh@354 -- # echo 1 00:12:33.926 00:55:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:33.926 00:55:08 -- scripts/common.sh@365 -- # decimal 2 00:12:33.926 00:55:08 -- scripts/common.sh@352 -- # local d=2 00:12:33.926 00:55:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.926 00:55:08 -- scripts/common.sh@354 -- # echo 2 00:12:33.926 00:55:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:33.926 00:55:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:33.926 00:55:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:33.926 00:55:08 -- scripts/common.sh@367 -- # return 0 00:12:33.926 00:55:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.926 00:55:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:33.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.927 --rc genhtml_branch_coverage=1 00:12:33.927 --rc genhtml_function_coverage=1 00:12:33.927 --rc genhtml_legend=1 00:12:33.927 --rc geninfo_all_blocks=1 00:12:33.927 --rc geninfo_unexecuted_blocks=1 00:12:33.927 00:12:33.927 ' 00:12:33.927 00:55:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:33.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.927 --rc genhtml_branch_coverage=1 00:12:33.927 --rc genhtml_function_coverage=1 00:12:33.927 --rc genhtml_legend=1 00:12:33.927 --rc geninfo_all_blocks=1 00:12:33.927 --rc geninfo_unexecuted_blocks=1 00:12:33.927 00:12:33.927 ' 00:12:33.927 00:55:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:33.927 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:33.927 --rc genhtml_branch_coverage=1 00:12:33.927 --rc genhtml_function_coverage=1 00:12:33.927 --rc genhtml_legend=1 00:12:33.927 --rc geninfo_all_blocks=1 00:12:33.927 --rc geninfo_unexecuted_blocks=1 00:12:33.927 00:12:33.927 ' 00:12:33.927 00:55:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:33.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.927 --rc genhtml_branch_coverage=1 00:12:33.927 --rc genhtml_function_coverage=1 00:12:33.927 --rc genhtml_legend=1 00:12:33.927 --rc geninfo_all_blocks=1 00:12:33.927 --rc geninfo_unexecuted_blocks=1 00:12:33.927 00:12:33.927 ' 00:12:33.927 00:55:08 -- app/version.sh@17 -- # get_header_version major 00:12:33.927 00:55:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:33.927 00:55:08 -- app/version.sh@14 -- # tr -d '"' 00:12:33.927 00:55:08 -- app/version.sh@14 -- # cut -f2 00:12:33.927 00:55:08 -- app/version.sh@17 -- # major=24 00:12:33.927 00:55:08 -- app/version.sh@18 -- # get_header_version minor 00:12:33.927 00:55:08 -- app/version.sh@14 -- # cut -f2 00:12:33.927 00:55:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:33.927 00:55:08 -- app/version.sh@14 -- # tr -d '"' 00:12:33.927 00:55:08 -- app/version.sh@18 -- # minor=1 00:12:33.927 00:55:08 -- app/version.sh@19 -- # get_header_version patch 00:12:33.927 00:55:08 -- app/version.sh@14 -- # cut -f2 00:12:33.927 00:55:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:33.927 00:55:08 -- app/version.sh@14 -- # tr -d '"' 00:12:33.927 00:55:08 -- app/version.sh@19 -- # patch=1 00:12:33.927 00:55:08 -- app/version.sh@20 -- # get_header_version suffix 00:12:33.927 00:55:08 -- app/version.sh@14 -- # cut -f2 00:12:33.927 00:55:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:33.927 00:55:08 -- app/version.sh@14 -- # tr -d '"' 00:12:33.927 00:55:08 -- app/version.sh@20 -- # suffix=-pre 00:12:33.927 00:55:08 -- app/version.sh@22 -- # version=24.1 00:12:33.927 00:55:08 -- app/version.sh@25 -- # (( patch != 0 )) 00:12:33.927 00:55:08 -- app/version.sh@25 -- # version=24.1.1 00:12:33.927 00:55:08 -- app/version.sh@28 -- # version=24.1.1rc0 00:12:33.927 00:55:08 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:33.927 00:55:08 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:34.186 00:55:08 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:12:34.186 00:55:08 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:12:34.186 ************************************ 00:12:34.186 END TEST version 00:12:34.186 ************************************ 00:12:34.186 00:12:34.186 real 0m0.280s 00:12:34.186 user 0m0.168s 00:12:34.186 sys 0m0.169s 00:12:34.186 00:55:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:34.186 00:55:08 -- common/autotest_common.sh@10 -- # set +x 00:12:34.186 00:55:08 -- spdk/autotest.sh@181 -- # '[' 1 -eq 1 ']' 00:12:34.186 00:55:08 -- spdk/autotest.sh@182 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 
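The version test above reduces to pulling the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX macros out of include/spdk/version.h and comparing the composed string against what the bundled Python package reports. A minimal sketch of the same extraction, not captured in this run, using the repo paths seen above (cut -f2 picks the tab-separated value field and tr strips the quotes from string-valued macros):

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    echo "${major}.${minor}.${patch}"                      # 24.1.1 in this run
    PYTHONPATH=/home/vagrant/spdk_repo/spdk/python \
      python3 -c 'import spdk; print(spdk.__version__)'    # 24.1.1rc0 here; the -pre suffix in the header
                                                           # shows up as the rc0 tag the test compares against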
00:12:34.186 00:55:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:34.186 00:55:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:34.187 00:55:08 -- common/autotest_common.sh@10 -- # set +x 00:12:34.187 ************************************ 00:12:34.187 START TEST blockdev_general 00:12:34.187 ************************************ 00:12:34.187 00:55:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:34.187 * Looking for test storage... 00:12:34.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:34.187 00:55:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:34.187 00:55:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:34.187 00:55:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:34.446 00:55:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:34.446 00:55:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:34.446 00:55:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:34.446 00:55:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:34.446 00:55:08 -- scripts/common.sh@335 -- # IFS=.-: 00:12:34.446 00:55:08 -- scripts/common.sh@335 -- # read -ra ver1 00:12:34.446 00:55:08 -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.446 00:55:08 -- scripts/common.sh@336 -- # read -ra ver2 00:12:34.446 00:55:08 -- scripts/common.sh@337 -- # local 'op=<' 00:12:34.446 00:55:08 -- scripts/common.sh@339 -- # ver1_l=2 00:12:34.446 00:55:08 -- scripts/common.sh@340 -- # ver2_l=1 00:12:34.446 00:55:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:34.446 00:55:08 -- scripts/common.sh@343 -- # case "$op" in 00:12:34.446 00:55:08 -- scripts/common.sh@344 -- # : 1 00:12:34.446 00:55:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:34.446 00:55:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:34.446 00:55:08 -- scripts/common.sh@364 -- # decimal 1 00:12:34.446 00:55:08 -- scripts/common.sh@352 -- # local d=1 00:12:34.446 00:55:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.446 00:55:08 -- scripts/common.sh@354 -- # echo 1 00:12:34.446 00:55:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:34.446 00:55:08 -- scripts/common.sh@365 -- # decimal 2 00:12:34.446 00:55:08 -- scripts/common.sh@352 -- # local d=2 00:12:34.446 00:55:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.446 00:55:08 -- scripts/common.sh@354 -- # echo 2 00:12:34.446 00:55:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:34.446 00:55:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:34.446 00:55:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:34.446 00:55:08 -- scripts/common.sh@367 -- # return 0 00:12:34.446 00:55:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.446 00:55:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:34.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.447 --rc genhtml_branch_coverage=1 00:12:34.447 --rc genhtml_function_coverage=1 00:12:34.447 --rc genhtml_legend=1 00:12:34.447 --rc geninfo_all_blocks=1 00:12:34.447 --rc geninfo_unexecuted_blocks=1 00:12:34.447 00:12:34.447 ' 00:12:34.447 00:55:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:34.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.447 --rc genhtml_branch_coverage=1 00:12:34.447 --rc genhtml_function_coverage=1 00:12:34.447 --rc genhtml_legend=1 00:12:34.447 --rc geninfo_all_blocks=1 00:12:34.447 --rc geninfo_unexecuted_blocks=1 00:12:34.447 00:12:34.447 ' 00:12:34.447 00:55:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:34.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.447 --rc genhtml_branch_coverage=1 00:12:34.447 --rc genhtml_function_coverage=1 00:12:34.447 --rc genhtml_legend=1 00:12:34.447 --rc geninfo_all_blocks=1 00:12:34.447 --rc geninfo_unexecuted_blocks=1 00:12:34.447 00:12:34.447 ' 00:12:34.447 00:55:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:34.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.447 --rc genhtml_branch_coverage=1 00:12:34.447 --rc genhtml_function_coverage=1 00:12:34.447 --rc genhtml_legend=1 00:12:34.447 --rc geninfo_all_blocks=1 00:12:34.447 --rc geninfo_unexecuted_blocks=1 00:12:34.447 00:12:34.447 ' 00:12:34.447 00:55:08 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:34.447 00:55:08 -- bdev/nbd_common.sh@6 -- # set -e 00:12:34.447 00:55:08 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:34.447 00:55:08 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:34.447 00:55:08 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:34.447 00:55:08 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:34.447 00:55:08 -- bdev/blockdev.sh@18 -- # : 00:12:34.447 00:55:08 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:34.447 00:55:08 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:34.447 00:55:08 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:34.447 00:55:08 -- bdev/blockdev.sh@672 -- # uname -s 00:12:34.447 00:55:08 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:34.447 00:55:08 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:34.447 00:55:08 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:12:34.447 00:55:08 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:34.447 00:55:08 -- bdev/blockdev.sh@682 -- # dek= 00:12:34.447 00:55:08 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:34.447 00:55:08 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:34.447 00:55:08 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:34.447 00:55:08 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:12:34.447 00:55:08 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:12:34.447 00:55:08 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:34.447 00:55:08 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=119794 00:12:34.447 00:55:08 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:34.447 00:55:08 -- bdev/blockdev.sh@47 -- # waitforlisten 119794 00:12:34.447 00:55:08 -- common/autotest_common.sh@829 -- # '[' -z 119794 ']' 00:12:34.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.447 00:55:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.447 00:55:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:34.447 00:55:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.447 00:55:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:34.447 00:55:08 -- common/autotest_common.sh@10 -- # set +x 00:12:34.447 00:55:08 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:34.447 [2024-11-18 00:55:08.713486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:34.447 [2024-11-18 00:55:08.713778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119794 ] 00:12:34.706 [2024-11-18 00:55:08.866209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.706 [2024-11-18 00:55:08.938306] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:34.706 [2024-11-18 00:55:08.938537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.272 00:55:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:35.272 00:55:09 -- common/autotest_common.sh@862 -- # return 0 00:12:35.272 00:55:09 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:35.272 00:55:09 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:12:35.272 00:55:09 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:12:35.272 00:55:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.272 00:55:09 -- common/autotest_common.sh@10 -- # set +x 00:12:35.840 [2024-11-18 00:55:09.997930] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:35.840 [2024-11-18 00:55:09.998028] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:35.840 00:12:35.840 [2024-11-18 00:55:10.005895] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:35.840 [2024-11-18 00:55:10.005961] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:35.840 00:12:35.840 Malloc0 00:12:35.840 Malloc1 00:12:35.840 Malloc2 00:12:35.840 Malloc3 00:12:35.840 Malloc4 00:12:35.840 
Malloc5 00:12:35.840 Malloc6 00:12:35.840 Malloc7 00:12:35.840 Malloc8 00:12:35.840 Malloc9 00:12:35.840 [2024-11-18 00:55:10.231352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:35.840 [2024-11-18 00:55:10.231486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.840 [2024-11-18 00:55:10.231542] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:35.840 [2024-11-18 00:55:10.231570] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.840 [2024-11-18 00:55:10.234416] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.840 [2024-11-18 00:55:10.234477] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:35.840 TestPT 00:12:36.100 00:55:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.100 00:55:10 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:36.100 5000+0 records in 00:12:36.100 5000+0 records out 00:12:36.100 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0363323 s, 282 MB/s 00:12:36.100 00:55:10 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:36.100 00:55:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.100 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:12:36.100 AIO0 00:12:36.100 00:55:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.100 00:55:10 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:36.100 00:55:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.100 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:12:36.100 00:55:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.100 00:55:10 -- bdev/blockdev.sh@738 -- # cat 00:12:36.100 00:55:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:36.100 00:55:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.100 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:12:36.100 00:55:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.100 00:55:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:36.100 00:55:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.100 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:12:36.100 00:55:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.100 00:55:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:36.100 00:55:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.100 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:12:36.100 00:55:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.100 00:55:10 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:36.100 00:55:10 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:36.100 00:55:10 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:36.100 00:55:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.100 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:12:36.360 00:55:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.360 00:55:10 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:36.360 00:55:10 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:36.362 00:55:10 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2f552dcc-3674-4ddc-bdc3-57ac3f42d480"' ' ],' ' 
"product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2f552dcc-3674-4ddc-bdc3-57ac3f42d480",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "04690189-c5c6-5825-8c80-ee8b2c8f4eb0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "04690189-c5c6-5825-8c80-ee8b2c8f4eb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "6329a1bc-e850-5f8f-8e68-26f2940c72e2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "6329a1bc-e850-5f8f-8e68-26f2940c72e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "da429998-f024-564a-b23e-207b91669d72"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "da429998-f024-564a-b23e-207b91669d72",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "c4f8ce9b-7b73-5157-85f2-56ba96ee1c2c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c4f8ce9b-7b73-5157-85f2-56ba96ee1c2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' 
' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f201412f-8107-55aa-a8e0-6ab94db7ddb7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f201412f-8107-55aa-a8e0-6ab94db7ddb7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6ebd439c-c4c4-50da-915a-d931696c4ae4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6ebd439c-c4c4-50da-915a-d931696c4ae4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "c66b342b-1498-59a9-a2ae-58330ee05455"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c66b342b-1498-59a9-a2ae-58330ee05455",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "07823cc9-566b-53dd-bf87-14ff25292a0f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "07823cc9-566b-53dd-bf87-14ff25292a0f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "56ff126d-0df4-5cc0-a8e5-2ebaa10b11db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "56ff126d-0df4-5cc0-a8e5-2ebaa10b11db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "15d03432-f62f-5a56-97d9-eb590cf337e0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "15d03432-f62f-5a56-97d9-eb590cf337e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "40bee1b1-8787-50e6-a51e-ccbd59417d2d"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "40bee1b1-8787-50e6-a51e-ccbd59417d2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "45af5eda-1584-4764-9389-92e20a57f1fe"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "45af5eda-1584-4764-9389-92e20a57f1fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "45af5eda-1584-4764-9389-92e20a57f1fe",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "23880358-058b-42fa-b12f-6eb093dabf8c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "8c477bed-7a78-4dd6-98a9-7a040b561fdc",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "eddabf7d-5abe-4238-ac79-4b82a7646535"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "eddabf7d-5abe-4238-ac79-4b82a7646535",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "eddabf7d-5abe-4238-ac79-4b82a7646535",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "c9bfca97-3010-473a-a9e3-0be2032c8998",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "0e4ea2fc-313a-4316-bb64-b38d04b2b77e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "d4407d14-459a-4957-a811-b0f1e3d8c097"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d4407d14-459a-4957-a811-b0f1e3d8c097",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d4407d14-459a-4957-a811-b0f1e3d8c097",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "62f657c2-89d2-4894-9364-11ea6dc46397",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "dfd71065-cfeb-441a-98a9-12a3d91b547d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "3d53e94c-5cbf-49b5-bd2b-ad50c2cc619a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "3d53e94c-5cbf-49b5-bd2b-ad50c2cc619a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:36.362 00:55:10 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:36.362 00:55:10 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:12:36.362 00:55:10 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:36.362 00:55:10 -- bdev/blockdev.sh@752 -- # killprocess 119794 00:12:36.362 00:55:10 -- common/autotest_common.sh@936 -- # '[' -z 119794 ']' 00:12:36.362 00:55:10 -- common/autotest_common.sh@940 -- # kill -0 119794 00:12:36.362 00:55:10 -- common/autotest_common.sh@941 -- # uname 00:12:36.362 00:55:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:36.362 00:55:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119794 00:12:36.362 00:55:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:36.362 00:55:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:36.362 00:55:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119794' 00:12:36.362 killing process with pid 119794 00:12:36.362 00:55:10 -- common/autotest_common.sh@955 -- # kill 119794 00:12:36.362 00:55:10 -- common/autotest_common.sh@960 -- # wait 119794 00:12:37.302 00:55:11 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:37.302 00:55:11 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:37.302 00:55:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:37.302 00:55:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:37.302 00:55:11 -- common/autotest_common.sh@10 -- # set +x 00:12:37.302 ************************************ 00:12:37.302 START TEST bdev_hello_world 00:12:37.302 ************************************ 00:12:37.302 00:55:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:37.302 [2024-11-18 00:55:11.582983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:37.302 [2024-11-18 00:55:11.583258] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119860 ] 00:12:37.570 [2024-11-18 00:55:11.740073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.570 [2024-11-18 00:55:11.809771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.844 [2024-11-18 00:55:11.987506] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:37.844 [2024-11-18 00:55:11.987615] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:37.844 [2024-11-18 00:55:11.995415] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:37.844 [2024-11-18 00:55:11.995482] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:37.844 [2024-11-18 00:55:12.003457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:37.844 [2024-11-18 00:55:12.003519] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:37.844 [2024-11-18 00:55:12.003570] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:37.844 [2024-11-18 00:55:12.117444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:37.844 [2024-11-18 00:55:12.117548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.844 [2024-11-18 00:55:12.117621] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:37.844 [2024-11-18 00:55:12.117659] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.844 [2024-11-18 00:55:12.120505] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.844 [2024-11-18 00:55:12.120570] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:38.155 [2024-11-18 00:55:12.311463] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:38.155 [2024-11-18 00:55:12.311557] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:38.155 [2024-11-18 00:55:12.311671] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:38.155 [2024-11-18 00:55:12.311754] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:38.155 [2024-11-18 00:55:12.311864] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:38.155 [2024-11-18 00:55:12.311905] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:38.155 [2024-11-18 00:55:12.311967] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:12:38.155 00:12:38.155 [2024-11-18 00:55:12.312035] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:38.724 00:12:38.724 real 0m1.411s 00:12:38.724 user 0m0.836s 00:12:38.724 sys 0m0.431s 00:12:38.724 00:55:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:38.724 00:55:12 -- common/autotest_common.sh@10 -- # set +x 00:12:38.724 ************************************ 00:12:38.724 END TEST bdev_hello_world 00:12:38.724 ************************************ 00:12:38.724 00:55:12 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:38.724 00:55:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:38.724 00:55:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:38.724 00:55:12 -- common/autotest_common.sh@10 -- # set +x 00:12:38.724 ************************************ 00:12:38.724 START TEST bdev_bounds 00:12:38.724 ************************************ 00:12:38.724 00:55:12 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:12:38.724 00:55:12 -- bdev/blockdev.sh@288 -- # bdevio_pid=119898 00:12:38.724 00:55:12 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:38.724 Process bdevio pid: 119898 00:12:38.724 00:55:12 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 119898' 00:12:38.724 00:55:12 -- bdev/blockdev.sh@291 -- # waitforlisten 119898 00:12:38.724 00:55:12 -- common/autotest_common.sh@829 -- # '[' -z 119898 ']' 00:12:38.724 00:55:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.724 00:55:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.724 00:55:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.724 00:55:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.724 00:55:12 -- common/autotest_common.sh@10 -- # set +x 00:12:38.724 00:55:12 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:38.724 [2024-11-18 00:55:13.062314] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:38.724 [2024-11-18 00:55:13.062771] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119898 ] 00:12:38.983 [2024-11-18 00:55:13.235616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:38.983 [2024-11-18 00:55:13.313120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.983 [2024-11-18 00:55:13.313319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.983 [2024-11-18 00:55:13.313322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.242 [2024-11-18 00:55:13.496584] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:39.242 [2024-11-18 00:55:13.496711] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:39.242 [2024-11-18 00:55:13.504477] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:39.242 [2024-11-18 00:55:13.504559] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:39.242 [2024-11-18 00:55:13.512553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:39.242 [2024-11-18 00:55:13.512639] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:39.242 [2024-11-18 00:55:13.512682] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:39.242 [2024-11-18 00:55:13.632949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:39.242 [2024-11-18 00:55:13.633087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.242 [2024-11-18 00:55:13.633176] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:39.242 [2024-11-18 00:55:13.633204] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.242 [2024-11-18 00:55:13.636239] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.242 [2024-11-18 00:55:13.636300] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:39.811 00:55:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.811 00:55:13 -- common/autotest_common.sh@862 -- # return 0 00:12:39.811 00:55:13 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:39.811 I/O targets: 00:12:39.811 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:39.811 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:39.811 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:39.811 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:39.811 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:39.811 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:39.811 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:39.811 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:39.811 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:39.811 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:39.811 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:39.811 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:39.811 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:39.811 concat0: 131072 blocks of 512 bytes (64 MiB) 00:12:39.811 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:39.811 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
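The bdevio run that produced the I/O target list above is two cooperating processes: bdevio is launched with the JSON bdev config and the -w flag so it registers the bdevs and then waits, and tests.py drives the CUnit suites over the RPC socket against every listed target. A minimal sketch of that wiring, not captured in this run, using the same paths as above from the repo root:

    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &   # register the bdevs, then wait
    ./test/bdev/bdevio/tests.py perform_tests                          # trigger the per-bdev CUnit suites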
00:12:39.811 00:12:39.811 00:12:39.811 CUnit - A unit testing framework for C - Version 2.1-3 00:12:39.811 http://cunit.sourceforge.net/ 00:12:39.811 00:12:39.811 00:12:39.811 Suite: bdevio tests on: AIO0 00:12:39.811 Test: blockdev write read block ...passed 00:12:39.811 Test: blockdev write zeroes read block ...passed 00:12:39.811 Test: blockdev write zeroes read no split ...passed 00:12:39.811 Test: blockdev write zeroes read split ...passed 00:12:39.811 Test: blockdev write zeroes read split partial ...passed 00:12:39.811 Test: blockdev reset ...passed 00:12:39.811 Test: blockdev write read 8 blocks ...passed 00:12:39.811 Test: blockdev write read size > 128k ...passed 00:12:39.811 Test: blockdev write read invalid size ...passed 00:12:39.811 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:39.811 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:39.811 Test: blockdev write read max offset ...passed 00:12:39.811 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:39.811 Test: blockdev writev readv 8 blocks ...passed 00:12:39.811 Test: blockdev writev readv 30 x 1block ...passed 00:12:39.811 Test: blockdev writev readv block ...passed 00:12:39.811 Test: blockdev writev readv size > 128k ...passed 00:12:39.811 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:39.811 Test: blockdev comparev and writev ...passed 00:12:39.811 Test: blockdev nvme passthru rw ...passed 00:12:39.811 Test: blockdev nvme passthru vendor specific ...passed 00:12:39.811 Test: blockdev nvme admin passthru ...passed 00:12:39.811 Test: blockdev copy ...passed 00:12:39.811 Suite: bdevio tests on: raid1 00:12:39.811 Test: blockdev write read block ...passed 00:12:39.811 Test: blockdev write zeroes read block ...passed 00:12:39.811 Test: blockdev write zeroes read no split ...passed 00:12:39.811 Test: blockdev write zeroes read split ...passed 00:12:39.811 Test: blockdev write zeroes read split partial ...passed 00:12:39.811 Test: blockdev reset ...passed 00:12:39.811 Test: blockdev write read 8 blocks ...passed 00:12:39.811 Test: blockdev write read size > 128k ...passed 00:12:39.811 Test: blockdev write read invalid size ...passed 00:12:39.811 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:39.811 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:39.811 Test: blockdev write read max offset ...passed 00:12:39.811 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:39.811 Test: blockdev writev readv 8 blocks ...passed 00:12:39.811 Test: blockdev writev readv 30 x 1block ...passed 00:12:39.811 Test: blockdev writev readv block ...passed 00:12:39.811 Test: blockdev writev readv size > 128k ...passed 00:12:39.811 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:39.811 Test: blockdev comparev and writev ...passed 00:12:39.811 Test: blockdev nvme passthru rw ...passed 00:12:39.811 Test: blockdev nvme passthru vendor specific ...passed 00:12:39.811 Test: blockdev nvme admin passthru ...passed 00:12:39.811 Test: blockdev copy ...passed 00:12:39.811 Suite: bdevio tests on: concat0 00:12:39.811 Test: blockdev write read block ...passed 00:12:39.811 Test: blockdev write zeroes read block ...passed 00:12:39.811 Test: blockdev write zeroes read no split ...passed 00:12:39.811 Test: blockdev write zeroes read split ...passed 00:12:39.811 Test: blockdev write zeroes read split partial ...passed 00:12:39.811 Test: blockdev reset 
...passed 00:12:39.811 Test: blockdev write read 8 blocks ...passed 00:12:39.811 Test: blockdev write read size > 128k ...passed 00:12:39.811 Test: blockdev write read invalid size ...passed 00:12:39.811 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:39.811 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:39.811 Test: blockdev write read max offset ...passed 00:12:39.811 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:39.811 Test: blockdev writev readv 8 blocks ...passed 00:12:39.811 Test: blockdev writev readv 30 x 1block ...passed 00:12:39.811 Test: blockdev writev readv block ...passed 00:12:39.811 Test: blockdev writev readv size > 128k ...passed 00:12:39.811 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:39.811 Test: blockdev comparev and writev ...passed 00:12:39.811 Test: blockdev nvme passthru rw ...passed 00:12:39.811 Test: blockdev nvme passthru vendor specific ...passed 00:12:39.811 Test: blockdev nvme admin passthru ...passed 00:12:39.811 Test: blockdev copy ...passed 00:12:39.811 Suite: bdevio tests on: raid0 00:12:39.811 Test: blockdev write read block ...passed 00:12:39.811 Test: blockdev write zeroes read block ...passed 00:12:39.811 Test: blockdev write zeroes read no split ...passed 00:12:39.811 Test: blockdev write zeroes read split ...passed 00:12:39.811 Test: blockdev write zeroes read split partial ...passed 00:12:39.811 Test: blockdev reset ...passed 00:12:39.811 Test: blockdev write read 8 blocks ...passed 00:12:39.811 Test: blockdev write read size > 128k ...passed 00:12:39.811 Test: blockdev write read invalid size ...passed 00:12:39.811 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:39.811 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:39.811 Test: blockdev write read max offset ...passed 00:12:39.811 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:39.811 Test: blockdev writev readv 8 blocks ...passed 00:12:39.811 Test: blockdev writev readv 30 x 1block ...passed 00:12:39.811 Test: blockdev writev readv block ...passed 00:12:39.811 Test: blockdev writev readv size > 128k ...passed 00:12:39.811 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:39.811 Test: blockdev comparev and writev ...passed 00:12:39.811 Test: blockdev nvme passthru rw ...passed 00:12:39.811 Test: blockdev nvme passthru vendor specific ...passed 00:12:39.811 Test: blockdev nvme admin passthru ...passed 00:12:39.811 Test: blockdev copy ...passed 00:12:39.811 Suite: bdevio tests on: TestPT 00:12:39.811 Test: blockdev write read block ...passed 00:12:39.811 Test: blockdev write zeroes read block ...passed 00:12:39.811 Test: blockdev write zeroes read no split ...passed 00:12:39.811 Test: blockdev write zeroes read split ...passed 00:12:39.811 Test: blockdev write zeroes read split partial ...passed 00:12:39.811 Test: blockdev reset ...passed 00:12:39.811 Test: blockdev write read 8 blocks ...passed 00:12:39.811 Test: blockdev write read size > 128k ...passed 00:12:39.811 Test: blockdev write read invalid size ...passed 00:12:39.811 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:39.811 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:39.811 Test: blockdev write read max offset ...passed 00:12:39.811 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:39.811 Test: blockdev writev readv 8 blocks 
...passed 00:12:39.811 Test: blockdev writev readv 30 x 1block ...passed 00:12:39.811 Test: blockdev writev readv block ...passed 00:12:39.811 Test: blockdev writev readv size > 128k ...passed 00:12:39.811 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:39.811 Test: blockdev comparev and writev ...passed 00:12:39.811 Test: blockdev nvme passthru rw ...passed 00:12:39.811 Test: blockdev nvme passthru vendor specific ...passed 00:12:39.811 Test: blockdev nvme admin passthru ...passed 00:12:39.811 Test: blockdev copy ...passed 00:12:39.811 Suite: bdevio tests on: Malloc2p7 00:12:39.811 Test: blockdev write read block ...passed 00:12:39.811 Test: blockdev write zeroes read block ...passed 00:12:39.812 Test: blockdev write zeroes read no split ...passed 00:12:39.812 Test: blockdev write zeroes read split ...passed 00:12:39.812 Test: blockdev write zeroes read split partial ...passed 00:12:39.812 Test: blockdev reset ...passed 00:12:39.812 Test: blockdev write read 8 blocks ...passed 00:12:39.812 Test: blockdev write read size > 128k ...passed 00:12:39.812 Test: blockdev write read invalid size ...passed 00:12:39.812 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:39.812 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:39.812 Test: blockdev write read max offset ...passed 00:12:39.812 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:39.812 Test: blockdev writev readv 8 blocks ...passed 00:12:39.812 Test: blockdev writev readv 30 x 1block ...passed 00:12:39.812 Test: blockdev writev readv block ...passed 00:12:39.812 Test: blockdev writev readv size > 128k ...passed 00:12:39.812 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:39.812 Test: blockdev comparev and writev ...passed 00:12:39.812 Test: blockdev nvme passthru rw ...passed 00:12:39.812 Test: blockdev nvme passthru vendor specific ...passed 00:12:39.812 Test: blockdev nvme admin passthru ...passed 00:12:39.812 Test: blockdev copy ...passed 00:12:39.812 Suite: bdevio tests on: Malloc2p6 00:12:39.812 Test: blockdev write read block ...passed 00:12:39.812 Test: blockdev write zeroes read block ...passed 00:12:39.812 Test: blockdev write zeroes read no split ...passed 00:12:39.812 Test: blockdev write zeroes read split ...passed 00:12:40.072 Test: blockdev write zeroes read split partial ...passed 00:12:40.072 Test: blockdev reset ...passed 00:12:40.072 Test: blockdev write read 8 blocks ...passed 00:12:40.072 Test: blockdev write read size > 128k ...passed 00:12:40.072 Test: blockdev write read invalid size ...passed 00:12:40.072 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.072 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.072 Test: blockdev write read max offset ...passed 00:12:40.072 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.072 Test: blockdev writev readv 8 blocks ...passed 00:12:40.072 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.072 Test: blockdev writev readv block ...passed 00:12:40.072 Test: blockdev writev readv size > 128k ...passed 00:12:40.072 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.072 Test: blockdev comparev and writev ...passed 00:12:40.072 Test: blockdev nvme passthru rw ...passed 00:12:40.072 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.072 Test: blockdev nvme admin passthru ...passed 00:12:40.072 Test: blockdev copy ...passed 
00:12:40.072 Suite: bdevio tests on: Malloc2p5 00:12:40.072 Test: blockdev write read block ...passed 00:12:40.072 Test: blockdev write zeroes read block ...passed 00:12:40.072 Test: blockdev write zeroes read no split ...passed 00:12:40.072 Test: blockdev write zeroes read split ...passed 00:12:40.072 Test: blockdev write zeroes read split partial ...passed 00:12:40.072 Test: blockdev reset ...passed 00:12:40.072 Test: blockdev write read 8 blocks ...passed 00:12:40.072 Test: blockdev write read size > 128k ...passed 00:12:40.072 Test: blockdev write read invalid size ...passed 00:12:40.072 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.072 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.072 Test: blockdev write read max offset ...passed 00:12:40.072 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.072 Test: blockdev writev readv 8 blocks ...passed 00:12:40.072 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.072 Test: blockdev writev readv block ...passed 00:12:40.072 Test: blockdev writev readv size > 128k ...passed 00:12:40.072 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.072 Test: blockdev comparev and writev ...passed 00:12:40.072 Test: blockdev nvme passthru rw ...passed 00:12:40.072 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.072 Test: blockdev nvme admin passthru ...passed 00:12:40.072 Test: blockdev copy ...passed 00:12:40.072 Suite: bdevio tests on: Malloc2p4 00:12:40.072 Test: blockdev write read block ...passed 00:12:40.072 Test: blockdev write zeroes read block ...passed 00:12:40.072 Test: blockdev write zeroes read no split ...passed 00:12:40.072 Test: blockdev write zeroes read split ...passed 00:12:40.072 Test: blockdev write zeroes read split partial ...passed 00:12:40.072 Test: blockdev reset ...passed 00:12:40.072 Test: blockdev write read 8 blocks ...passed 00:12:40.072 Test: blockdev write read size > 128k ...passed 00:12:40.072 Test: blockdev write read invalid size ...passed 00:12:40.072 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.072 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.072 Test: blockdev write read max offset ...passed 00:12:40.072 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.072 Test: blockdev writev readv 8 blocks ...passed 00:12:40.072 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.072 Test: blockdev writev readv block ...passed 00:12:40.072 Test: blockdev writev readv size > 128k ...passed 00:12:40.072 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.072 Test: blockdev comparev and writev ...passed 00:12:40.072 Test: blockdev nvme passthru rw ...passed 00:12:40.072 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.072 Test: blockdev nvme admin passthru ...passed 00:12:40.072 Test: blockdev copy ...passed 00:12:40.072 Suite: bdevio tests on: Malloc2p3 00:12:40.072 Test: blockdev write read block ...passed 00:12:40.072 Test: blockdev write zeroes read block ...passed 00:12:40.072 Test: blockdev write zeroes read no split ...passed 00:12:40.072 Test: blockdev write zeroes read split ...passed 00:12:40.072 Test: blockdev write zeroes read split partial ...passed 00:12:40.072 Test: blockdev reset ...passed 00:12:40.072 Test: blockdev write read 8 blocks ...passed 00:12:40.072 Test: blockdev write read size > 128k ...passed 00:12:40.072 Test: 
blockdev write read invalid size ...passed 00:12:40.072 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.072 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.072 Test: blockdev write read max offset ...passed 00:12:40.072 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.072 Test: blockdev writev readv 8 blocks ...passed 00:12:40.072 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.072 Test: blockdev writev readv block ...passed 00:12:40.072 Test: blockdev writev readv size > 128k ...passed 00:12:40.072 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.072 Test: blockdev comparev and writev ...passed 00:12:40.072 Test: blockdev nvme passthru rw ...passed 00:12:40.072 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.073 Test: blockdev nvme admin passthru ...passed 00:12:40.073 Test: blockdev copy ...passed 00:12:40.073 Suite: bdevio tests on: Malloc2p2 00:12:40.073 Test: blockdev write read block ...passed 00:12:40.073 Test: blockdev write zeroes read block ...passed 00:12:40.073 Test: blockdev write zeroes read no split ...passed 00:12:40.073 Test: blockdev write zeroes read split ...passed 00:12:40.073 Test: blockdev write zeroes read split partial ...passed 00:12:40.073 Test: blockdev reset ...passed 00:12:40.073 Test: blockdev write read 8 blocks ...passed 00:12:40.073 Test: blockdev write read size > 128k ...passed 00:12:40.073 Test: blockdev write read invalid size ...passed 00:12:40.073 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.073 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.073 Test: blockdev write read max offset ...passed 00:12:40.073 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.073 Test: blockdev writev readv 8 blocks ...passed 00:12:40.073 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.073 Test: blockdev writev readv block ...passed 00:12:40.073 Test: blockdev writev readv size > 128k ...passed 00:12:40.073 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.073 Test: blockdev comparev and writev ...passed 00:12:40.073 Test: blockdev nvme passthru rw ...passed 00:12:40.073 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.073 Test: blockdev nvme admin passthru ...passed 00:12:40.073 Test: blockdev copy ...passed 00:12:40.073 Suite: bdevio tests on: Malloc2p1 00:12:40.073 Test: blockdev write read block ...passed 00:12:40.073 Test: blockdev write zeroes read block ...passed 00:12:40.073 Test: blockdev write zeroes read no split ...passed 00:12:40.073 Test: blockdev write zeroes read split ...passed 00:12:40.073 Test: blockdev write zeroes read split partial ...passed 00:12:40.073 Test: blockdev reset ...passed 00:12:40.073 Test: blockdev write read 8 blocks ...passed 00:12:40.073 Test: blockdev write read size > 128k ...passed 00:12:40.073 Test: blockdev write read invalid size ...passed 00:12:40.073 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.073 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.073 Test: blockdev write read max offset ...passed 00:12:40.073 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.073 Test: blockdev writev readv 8 blocks ...passed 00:12:40.073 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.073 Test: blockdev writev readv block ...passed 
00:12:40.073 Test: blockdev writev readv size > 128k ...passed 00:12:40.073 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.073 Test: blockdev comparev and writev ...passed 00:12:40.073 Test: blockdev nvme passthru rw ...passed 00:12:40.073 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.073 Test: blockdev nvme admin passthru ...passed 00:12:40.073 Test: blockdev copy ...passed 00:12:40.073 Suite: bdevio tests on: Malloc2p0 00:12:40.073 Test: blockdev write read block ...passed 00:12:40.073 Test: blockdev write zeroes read block ...passed 00:12:40.073 Test: blockdev write zeroes read no split ...passed 00:12:40.073 Test: blockdev write zeroes read split ...passed 00:12:40.073 Test: blockdev write zeroes read split partial ...passed 00:12:40.073 Test: blockdev reset ...passed 00:12:40.073 Test: blockdev write read 8 blocks ...passed 00:12:40.073 Test: blockdev write read size > 128k ...passed 00:12:40.073 Test: blockdev write read invalid size ...passed 00:12:40.073 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.073 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.073 Test: blockdev write read max offset ...passed 00:12:40.073 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.073 Test: blockdev writev readv 8 blocks ...passed 00:12:40.073 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.073 Test: blockdev writev readv block ...passed 00:12:40.073 Test: blockdev writev readv size > 128k ...passed 00:12:40.073 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.073 Test: blockdev comparev and writev ...passed 00:12:40.073 Test: blockdev nvme passthru rw ...passed 00:12:40.073 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.073 Test: blockdev nvme admin passthru ...passed 00:12:40.073 Test: blockdev copy ...passed 00:12:40.073 Suite: bdevio tests on: Malloc1p1 00:12:40.073 Test: blockdev write read block ...passed 00:12:40.073 Test: blockdev write zeroes read block ...passed 00:12:40.073 Test: blockdev write zeroes read no split ...passed 00:12:40.073 Test: blockdev write zeroes read split ...passed 00:12:40.073 Test: blockdev write zeroes read split partial ...passed 00:12:40.073 Test: blockdev reset ...passed 00:12:40.073 Test: blockdev write read 8 blocks ...passed 00:12:40.073 Test: blockdev write read size > 128k ...passed 00:12:40.073 Test: blockdev write read invalid size ...passed 00:12:40.073 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.073 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.073 Test: blockdev write read max offset ...passed 00:12:40.073 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.073 Test: blockdev writev readv 8 blocks ...passed 00:12:40.073 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.073 Test: blockdev writev readv block ...passed 00:12:40.073 Test: blockdev writev readv size > 128k ...passed 00:12:40.073 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.073 Test: blockdev comparev and writev ...passed 00:12:40.073 Test: blockdev nvme passthru rw ...passed 00:12:40.073 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.073 Test: blockdev nvme admin passthru ...passed 00:12:40.073 Test: blockdev copy ...passed 00:12:40.073 Suite: bdevio tests on: Malloc1p0 00:12:40.073 Test: blockdev write read block ...passed 00:12:40.073 Test: blockdev 
write zeroes read block ...passed 00:12:40.073 Test: blockdev write zeroes read no split ...passed 00:12:40.073 Test: blockdev write zeroes read split ...passed 00:12:40.073 Test: blockdev write zeroes read split partial ...passed 00:12:40.073 Test: blockdev reset ...passed 00:12:40.073 Test: blockdev write read 8 blocks ...passed 00:12:40.073 Test: blockdev write read size > 128k ...passed 00:12:40.073 Test: blockdev write read invalid size ...passed 00:12:40.073 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.073 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.073 Test: blockdev write read max offset ...passed 00:12:40.073 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.073 Test: blockdev writev readv 8 blocks ...passed 00:12:40.073 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.073 Test: blockdev writev readv block ...passed 00:12:40.073 Test: blockdev writev readv size > 128k ...passed 00:12:40.073 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.073 Test: blockdev comparev and writev ...passed 00:12:40.073 Test: blockdev nvme passthru rw ...passed 00:12:40.073 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.073 Test: blockdev nvme admin passthru ...passed 00:12:40.073 Test: blockdev copy ...passed 00:12:40.073 Suite: bdevio tests on: Malloc0 00:12:40.073 Test: blockdev write read block ...passed 00:12:40.073 Test: blockdev write zeroes read block ...passed 00:12:40.073 Test: blockdev write zeroes read no split ...passed 00:12:40.073 Test: blockdev write zeroes read split ...passed 00:12:40.073 Test: blockdev write zeroes read split partial ...passed 00:12:40.073 Test: blockdev reset ...passed 00:12:40.073 Test: blockdev write read 8 blocks ...passed 00:12:40.073 Test: blockdev write read size > 128k ...passed 00:12:40.073 Test: blockdev write read invalid size ...passed 00:12:40.073 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.073 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.073 Test: blockdev write read max offset ...passed 00:12:40.073 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.073 Test: blockdev writev readv 8 blocks ...passed 00:12:40.073 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.073 Test: blockdev writev readv block ...passed 00:12:40.073 Test: blockdev writev readv size > 128k ...passed 00:12:40.073 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.073 Test: blockdev comparev and writev ...passed 00:12:40.073 Test: blockdev nvme passthru rw ...passed 00:12:40.073 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.073 Test: blockdev nvme admin passthru ...passed 00:12:40.073 Test: blockdev copy ...passed 00:12:40.073 00:12:40.073 Run Summary: Type Total Ran Passed Failed Inactive 00:12:40.073 suites 16 16 n/a 0 0 00:12:40.073 tests 368 368 368 0 0 00:12:40.073 asserts 2224 2224 2224 0 n/a 00:12:40.073 00:12:40.073 Elapsed time = 0.680 seconds 00:12:40.073 0 00:12:40.073 00:55:14 -- bdev/blockdev.sh@293 -- # killprocess 119898 00:12:40.073 00:55:14 -- common/autotest_common.sh@936 -- # '[' -z 119898 ']' 00:12:40.073 00:55:14 -- common/autotest_common.sh@940 -- # kill -0 119898 00:12:40.073 00:55:14 -- common/autotest_common.sh@941 -- # uname 00:12:40.073 00:55:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:40.073 00:55:14 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119898 00:12:40.073 00:55:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:40.073 killing process with pid 119898 00:12:40.073 00:55:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:40.073 00:55:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119898' 00:12:40.073 00:55:14 -- common/autotest_common.sh@955 -- # kill 119898 00:12:40.073 00:55:14 -- common/autotest_common.sh@960 -- # wait 119898 00:12:40.643 ************************************ 00:12:40.643 END TEST bdev_bounds 00:12:40.643 ************************************ 00:12:40.643 00:55:14 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:40.643 00:12:40.643 real 0m1.978s 00:12:40.643 user 0m4.358s 00:12:40.643 sys 0m0.616s 00:12:40.643 00:55:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:40.643 00:55:14 -- common/autotest_common.sh@10 -- # set +x 00:12:40.643 00:55:15 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:40.643 00:55:15 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:40.643 00:55:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:40.643 00:55:15 -- common/autotest_common.sh@10 -- # set +x 00:12:40.643 ************************************ 00:12:40.643 START TEST bdev_nbd 00:12:40.643 ************************************ 00:12:40.643 00:55:15 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:40.643 00:55:15 -- bdev/blockdev.sh@298 -- # uname -s 00:12:40.643 00:55:15 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:40.643 00:55:15 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.643 00:55:15 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:40.643 00:55:15 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:40.643 00:55:15 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:40.643 00:55:15 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:12:40.643 00:55:15 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:40.643 00:55:15 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:40.643 00:55:15 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:40.643 00:55:15 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:12:40.643 00:55:15 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:40.643 00:55:15 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:40.643 00:55:15 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 
'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:40.643 00:55:15 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:40.643 00:55:15 -- bdev/blockdev.sh@316 -- # nbd_pid=119961 00:12:40.643 00:55:15 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:40.643 00:55:15 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:40.643 00:55:15 -- bdev/blockdev.sh@318 -- # waitforlisten 119961 /var/tmp/spdk-nbd.sock 00:12:40.643 00:55:15 -- common/autotest_common.sh@829 -- # '[' -z 119961 ']' 00:12:40.643 00:55:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:40.643 00:55:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:40.643 00:55:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:40.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:40.643 00:55:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:40.643 00:55:15 -- common/autotest_common.sh@10 -- # set +x 00:12:40.903 [2024-11-18 00:55:15.099359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:40.903 [2024-11-18 00:55:15.100229] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.903 [2024-11-18 00:55:15.243703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.162 [2024-11-18 00:55:15.314601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.162 [2024-11-18 00:55:15.492220] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:41.162 [2024-11-18 00:55:15.492526] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:41.162 [2024-11-18 00:55:15.500152] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:41.162 [2024-11-18 00:55:15.500323] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:41.162 [2024-11-18 00:55:15.508192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:41.162 [2024-11-18 00:55:15.508351] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:41.162 [2024-11-18 00:55:15.508473] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:41.421 [2024-11-18 00:55:15.621678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:41.421 [2024-11-18 00:55:15.622051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.421 [2024-11-18 00:55:15.622197] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:41.421 [2024-11-18 00:55:15.622339] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.421 [2024-11-18 00:55:15.625263] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.421 [2024-11-18 00:55:15.625459] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:41.681 00:55:15 -- common/autotest_common.sh@858 -- # (( i == 0 
)) 00:12:41.681 00:55:15 -- common/autotest_common.sh@862 -- # return 0 00:12:41.681 00:55:15 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:41.681 00:55:15 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.681 00:55:15 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:41.681 00:55:15 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:41.681 00:55:15 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:41.681 00:55:15 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.681 00:55:15 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:41.681 00:55:15 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:41.681 00:55:15 -- bdev/nbd_common.sh@24 -- # local i 00:12:41.681 00:55:15 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:41.681 00:55:15 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:41.681 00:55:15 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:41.681 00:55:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:41.940 00:55:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:41.940 00:55:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:41.940 00:55:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:41.940 00:55:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:41.940 00:55:16 -- common/autotest_common.sh@867 -- # local i 00:12:41.940 00:55:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:41.940 00:55:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:41.940 00:55:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:41.940 00:55:16 -- common/autotest_common.sh@871 -- # break 00:12:41.940 00:55:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:41.940 00:55:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:41.940 00:55:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.940 1+0 records in 00:12:41.940 1+0 records out 00:12:41.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528941 s, 7.7 MB/s 00:12:41.940 00:55:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.940 00:55:16 -- common/autotest_common.sh@884 -- # size=4096 00:12:41.940 00:55:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.940 00:55:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:41.940 00:55:16 -- common/autotest_common.sh@887 -- # return 0 00:12:41.940 00:55:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:41.940 00:55:16 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:41.940 00:55:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:42.200 00:55:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:42.200 00:55:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:42.200 00:55:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:42.200 00:55:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:42.200 00:55:16 -- common/autotest_common.sh@867 -- # local i 00:12:42.200 00:55:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:42.200 00:55:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:42.200 00:55:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:42.200 00:55:16 -- common/autotest_common.sh@871 -- # break 00:12:42.200 00:55:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:42.200 00:55:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:42.200 00:55:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.200 1+0 records in 00:12:42.200 1+0 records out 00:12:42.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500065 s, 8.2 MB/s 00:12:42.200 00:55:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.200 00:55:16 -- common/autotest_common.sh@884 -- # size=4096 00:12:42.200 00:55:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.200 00:55:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:42.200 00:55:16 -- common/autotest_common.sh@887 -- # return 0 00:12:42.200 00:55:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.200 00:55:16 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:42.200 00:55:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:42.459 00:55:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:42.459 00:55:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:42.459 00:55:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:42.459 00:55:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:42.459 00:55:16 -- common/autotest_common.sh@867 -- # local i 00:12:42.459 00:55:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:42.459 00:55:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:42.459 00:55:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:42.459 00:55:16 -- common/autotest_common.sh@871 -- # break 00:12:42.459 00:55:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:42.459 00:55:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:42.459 00:55:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.459 1+0 records in 00:12:42.459 1+0 records out 00:12:42.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00091816 s, 4.5 MB/s 00:12:42.459 00:55:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.459 00:55:16 -- common/autotest_common.sh@884 -- # size=4096 00:12:42.459 00:55:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.459 00:55:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:42.459 00:55:16 -- common/autotest_common.sh@887 -- # return 0 00:12:42.459 00:55:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.459 00:55:16 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:42.459 
00:55:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:42.720 00:55:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:42.720 00:55:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:42.720 00:55:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:42.720 00:55:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:42.720 00:55:17 -- common/autotest_common.sh@867 -- # local i 00:12:42.720 00:55:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:42.720 00:55:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:42.720 00:55:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:42.720 00:55:17 -- common/autotest_common.sh@871 -- # break 00:12:42.720 00:55:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:42.720 00:55:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:42.720 00:55:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.720 1+0 records in 00:12:42.720 1+0 records out 00:12:42.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560353 s, 7.3 MB/s 00:12:42.720 00:55:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.720 00:55:17 -- common/autotest_common.sh@884 -- # size=4096 00:12:42.720 00:55:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.720 00:55:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:42.720 00:55:17 -- common/autotest_common.sh@887 -- # return 0 00:12:42.720 00:55:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.720 00:55:17 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:42.720 00:55:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:43.287 00:55:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:43.287 00:55:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:43.287 00:55:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:43.287 00:55:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:43.287 00:55:17 -- common/autotest_common.sh@867 -- # local i 00:12:43.287 00:55:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:43.287 00:55:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:43.287 00:55:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:43.287 00:55:17 -- common/autotest_common.sh@871 -- # break 00:12:43.287 00:55:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:43.287 00:55:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:43.287 00:55:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.287 1+0 records in 00:12:43.287 1+0 records out 00:12:43.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566614 s, 7.2 MB/s 00:12:43.288 00:55:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.288 00:55:17 -- common/autotest_common.sh@884 -- # size=4096 00:12:43.288 00:55:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.288 00:55:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:43.288 00:55:17 -- common/autotest_common.sh@887 -- # return 0 00:12:43.288 00:55:17 -- bdev/nbd_common.sh@27 -- # 
(( i++ )) 00:12:43.288 00:55:17 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:43.288 00:55:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:43.547 00:55:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:43.547 00:55:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:43.547 00:55:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:43.547 00:55:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:43.547 00:55:17 -- common/autotest_common.sh@867 -- # local i 00:12:43.547 00:55:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:43.547 00:55:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:43.547 00:55:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:43.547 00:55:17 -- common/autotest_common.sh@871 -- # break 00:12:43.547 00:55:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:43.547 00:55:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:43.547 00:55:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.547 1+0 records in 00:12:43.547 1+0 records out 00:12:43.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569894 s, 7.2 MB/s 00:12:43.547 00:55:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.547 00:55:17 -- common/autotest_common.sh@884 -- # size=4096 00:12:43.547 00:55:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.547 00:55:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:43.547 00:55:17 -- common/autotest_common.sh@887 -- # return 0 00:12:43.547 00:55:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.547 00:55:17 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:43.547 00:55:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:43.547 00:55:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:43.547 00:55:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:43.547 00:55:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:43.547 00:55:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:12:43.547 00:55:17 -- common/autotest_common.sh@867 -- # local i 00:12:43.547 00:55:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:43.547 00:55:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:43.547 00:55:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:12:43.547 00:55:17 -- common/autotest_common.sh@871 -- # break 00:12:43.547 00:55:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:43.547 00:55:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:43.547 00:55:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.547 1+0 records in 00:12:43.547 1+0 records out 00:12:43.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609784 s, 6.7 MB/s 00:12:43.547 00:55:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.806 00:55:17 -- common/autotest_common.sh@884 -- # size=4096 00:12:43.806 00:55:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.806 00:55:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:43.806 00:55:17 -- 
common/autotest_common.sh@887 -- # return 0 00:12:43.806 00:55:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.806 00:55:17 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:43.806 00:55:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:44.065 00:55:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:44.065 00:55:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:44.065 00:55:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:44.065 00:55:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:12:44.065 00:55:18 -- common/autotest_common.sh@867 -- # local i 00:12:44.065 00:55:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:44.065 00:55:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:44.065 00:55:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:12:44.065 00:55:18 -- common/autotest_common.sh@871 -- # break 00:12:44.065 00:55:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:44.065 00:55:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:44.065 00:55:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.065 1+0 records in 00:12:44.065 1+0 records out 00:12:44.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604384 s, 6.8 MB/s 00:12:44.065 00:55:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.065 00:55:18 -- common/autotest_common.sh@884 -- # size=4096 00:12:44.065 00:55:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.065 00:55:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:44.065 00:55:18 -- common/autotest_common.sh@887 -- # return 0 00:12:44.065 00:55:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:44.065 00:55:18 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:44.065 00:55:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:44.324 00:55:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:44.324 00:55:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:44.324 00:55:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:44.324 00:55:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:12:44.324 00:55:18 -- common/autotest_common.sh@867 -- # local i 00:12:44.324 00:55:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:44.324 00:55:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:44.324 00:55:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:12:44.324 00:55:18 -- common/autotest_common.sh@871 -- # break 00:12:44.324 00:55:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:44.324 00:55:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:44.324 00:55:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.324 1+0 records in 00:12:44.324 1+0 records out 00:12:44.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731082 s, 5.6 MB/s 00:12:44.324 00:55:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.324 00:55:18 -- common/autotest_common.sh@884 -- # size=4096 00:12:44.324 00:55:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.324 
00:55:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:44.324 00:55:18 -- common/autotest_common.sh@887 -- # return 0 00:12:44.324 00:55:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:44.324 00:55:18 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:44.324 00:55:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:44.583 00:55:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:44.583 00:55:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:44.583 00:55:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:44.583 00:55:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:12:44.583 00:55:18 -- common/autotest_common.sh@867 -- # local i 00:12:44.583 00:55:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:44.583 00:55:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:44.583 00:55:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:12:44.583 00:55:18 -- common/autotest_common.sh@871 -- # break 00:12:44.583 00:55:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:44.583 00:55:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:44.583 00:55:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.583 1+0 records in 00:12:44.583 1+0 records out 00:12:44.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597924 s, 6.9 MB/s 00:12:44.583 00:55:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.583 00:55:18 -- common/autotest_common.sh@884 -- # size=4096 00:12:44.583 00:55:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.583 00:55:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:44.583 00:55:18 -- common/autotest_common.sh@887 -- # return 0 00:12:44.583 00:55:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:44.583 00:55:18 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:44.583 00:55:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:44.843 00:55:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:44.843 00:55:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:44.843 00:55:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:44.843 00:55:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:44.843 00:55:19 -- common/autotest_common.sh@867 -- # local i 00:12:44.843 00:55:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:44.843 00:55:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:44.843 00:55:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:44.843 00:55:19 -- common/autotest_common.sh@871 -- # break 00:12:44.843 00:55:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:44.843 00:55:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:44.843 00:55:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.843 1+0 records in 00:12:44.843 1+0 records out 00:12:44.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660383 s, 6.2 MB/s 00:12:44.843 00:55:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.843 00:55:19 -- common/autotest_common.sh@884 -- # size=4096 00:12:44.843 00:55:19 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.843 00:55:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:44.843 00:55:19 -- common/autotest_common.sh@887 -- # return 0 00:12:44.843 00:55:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:44.843 00:55:19 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:44.843 00:55:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:45.101 00:55:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:45.101 00:55:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:45.101 00:55:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:45.101 00:55:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:45.101 00:55:19 -- common/autotest_common.sh@867 -- # local i 00:12:45.101 00:55:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:45.101 00:55:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:45.101 00:55:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:45.101 00:55:19 -- common/autotest_common.sh@871 -- # break 00:12:45.101 00:55:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:45.101 00:55:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:45.101 00:55:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.101 1+0 records in 00:12:45.101 1+0 records out 00:12:45.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105846 s, 3.9 MB/s 00:12:45.101 00:55:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.101 00:55:19 -- common/autotest_common.sh@884 -- # size=4096 00:12:45.101 00:55:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.101 00:55:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:45.101 00:55:19 -- common/autotest_common.sh@887 -- # return 0 00:12:45.101 00:55:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:45.101 00:55:19 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:45.101 00:55:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:45.360 00:55:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:45.360 00:55:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:45.360 00:55:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:45.360 00:55:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:45.360 00:55:19 -- common/autotest_common.sh@867 -- # local i 00:12:45.360 00:55:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:45.360 00:55:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:45.360 00:55:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:45.360 00:55:19 -- common/autotest_common.sh@871 -- # break 00:12:45.360 00:55:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:45.360 00:55:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:45.360 00:55:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.360 1+0 records in 00:12:45.360 1+0 records out 00:12:45.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000940778 s, 4.4 MB/s 00:12:45.360 00:55:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
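The per-device checks traced above all repeat the same waitfornbd pattern from autotest_common.sh; a rough sketch of what that xtrace corresponds to (reconstructed from the trace, not the verbatim helper source; the retry count and sleep interval are assumptions) is:

# Sketch of the waitfornbd helper as suggested by the xtrace above.
waitfornbd() {
    local nbd_name=$1 i size

    # Wait for /dev/$nbd_name to appear in /proc/partitions (bounded retries assumed).
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1
    done

    # Prove the device is readable: pull one 4 KiB block with O_DIRECT,
    # as the dd/stat/rm sequence in the trace does, and check it copied data.
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
              bs=4096 count=1 iflag=direct; then
            break
        fi
        sleep 0.1
    done
    size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    [ "$size" != 0 ]
}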
00:12:45.360 00:55:19 -- common/autotest_common.sh@884 -- # size=4096 00:12:45.360 00:55:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.360 00:55:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:45.360 00:55:19 -- common/autotest_common.sh@887 -- # return 0 00:12:45.360 00:55:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:45.360 00:55:19 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:45.360 00:55:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:45.619 00:55:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:45.619 00:55:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:45.619 00:55:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:45.619 00:55:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:45.619 00:55:19 -- common/autotest_common.sh@867 -- # local i 00:12:45.619 00:55:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:45.619 00:55:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:45.619 00:55:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:45.619 00:55:19 -- common/autotest_common.sh@871 -- # break 00:12:45.619 00:55:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:45.619 00:55:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:45.619 00:55:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.619 1+0 records in 00:12:45.619 1+0 records out 00:12:45.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00186348 s, 2.2 MB/s 00:12:45.619 00:55:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.619 00:55:19 -- common/autotest_common.sh@884 -- # size=4096 00:12:45.619 00:55:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.619 00:55:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:45.619 00:55:19 -- common/autotest_common.sh@887 -- # return 0 00:12:45.619 00:55:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:45.619 00:55:19 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:45.619 00:55:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:45.878 00:55:20 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:45.878 00:55:20 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:45.878 00:55:20 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:45.878 00:55:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:12:45.878 00:55:20 -- common/autotest_common.sh@867 -- # local i 00:12:45.878 00:55:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:45.878 00:55:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:45.878 00:55:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:12:45.878 00:55:20 -- common/autotest_common.sh@871 -- # break 00:12:45.878 00:55:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:45.878 00:55:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:45.878 00:55:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.878 1+0 records in 00:12:45.878 1+0 records out 00:12:45.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770391 s, 5.3 MB/s 00:12:45.878 00:55:20 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.878 00:55:20 -- common/autotest_common.sh@884 -- # size=4096 00:12:45.878 00:55:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.878 00:55:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:45.878 00:55:20 -- common/autotest_common.sh@887 -- # return 0 00:12:45.878 00:55:20 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:45.878 00:55:20 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:45.878 00:55:20 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:46.138 00:55:20 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:46.138 00:55:20 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:46.138 00:55:20 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:46.138 00:55:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:12:46.138 00:55:20 -- common/autotest_common.sh@867 -- # local i 00:12:46.138 00:55:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:46.138 00:55:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:46.138 00:55:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:12:46.138 00:55:20 -- common/autotest_common.sh@871 -- # break 00:12:46.138 00:55:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:46.138 00:55:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:46.138 00:55:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.138 1+0 records in 00:12:46.138 1+0 records out 00:12:46.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00178493 s, 2.3 MB/s 00:12:46.138 00:55:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.138 00:55:20 -- common/autotest_common.sh@884 -- # size=4096 00:12:46.138 00:55:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.138 00:55:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:46.138 00:55:20 -- common/autotest_common.sh@887 -- # return 0 00:12:46.138 00:55:20 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:46.138 00:55:20 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:46.138 00:55:20 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:46.398 00:55:20 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd0", 00:12:46.398 "bdev_name": "Malloc0" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd1", 00:12:46.398 "bdev_name": "Malloc1p0" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd2", 00:12:46.398 "bdev_name": "Malloc1p1" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd3", 00:12:46.398 "bdev_name": "Malloc2p0" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd4", 00:12:46.398 "bdev_name": "Malloc2p1" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd5", 00:12:46.398 "bdev_name": "Malloc2p2" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd6", 00:12:46.398 "bdev_name": "Malloc2p3" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd7", 00:12:46.398 "bdev_name": "Malloc2p4" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd8", 00:12:46.398 "bdev_name": "Malloc2p5" 
00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd9", 00:12:46.398 "bdev_name": "Malloc2p6" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd10", 00:12:46.398 "bdev_name": "Malloc2p7" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd11", 00:12:46.398 "bdev_name": "TestPT" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd12", 00:12:46.398 "bdev_name": "raid0" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd13", 00:12:46.398 "bdev_name": "concat0" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd14", 00:12:46.398 "bdev_name": "raid1" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd15", 00:12:46.398 "bdev_name": "AIO0" 00:12:46.398 } 00:12:46.398 ]' 00:12:46.398 00:55:20 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:46.398 00:55:20 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd0", 00:12:46.398 "bdev_name": "Malloc0" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd1", 00:12:46.398 "bdev_name": "Malloc1p0" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd2", 00:12:46.398 "bdev_name": "Malloc1p1" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd3", 00:12:46.398 "bdev_name": "Malloc2p0" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd4", 00:12:46.398 "bdev_name": "Malloc2p1" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd5", 00:12:46.398 "bdev_name": "Malloc2p2" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd6", 00:12:46.398 "bdev_name": "Malloc2p3" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd7", 00:12:46.398 "bdev_name": "Malloc2p4" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd8", 00:12:46.398 "bdev_name": "Malloc2p5" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd9", 00:12:46.398 "bdev_name": "Malloc2p6" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd10", 00:12:46.398 "bdev_name": "Malloc2p7" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd11", 00:12:46.398 "bdev_name": "TestPT" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd12", 00:12:46.398 "bdev_name": "raid0" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd13", 00:12:46.398 "bdev_name": "concat0" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd14", 00:12:46.398 "bdev_name": "raid1" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "nbd_device": "/dev/nbd15", 00:12:46.398 "bdev_name": "AIO0" 00:12:46.398 } 00:12:46.398 ]' 00:12:46.398 00:55:20 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:46.398 00:55:20 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:46.398 00:55:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:46.398 00:55:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:12:46.398 00:55:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:46.398 00:55:20 -- bdev/nbd_common.sh@51 -- # local i 00:12:46.398 00:55:20 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.398 00:55:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:46.658 00:55:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:46.658 00:55:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:46.658 00:55:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:46.658 00:55:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.658 00:55:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.658 00:55:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:46.658 00:55:20 -- bdev/nbd_common.sh@41 -- # break 00:12:46.658 00:55:20 -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.658 00:55:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.658 00:55:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:46.917 00:55:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:46.917 00:55:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:46.917 00:55:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:46.917 00:55:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.917 00:55:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.917 00:55:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:46.917 00:55:21 -- bdev/nbd_common.sh@41 -- # break 00:12:46.917 00:55:21 -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.917 00:55:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.917 00:55:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:47.176 00:55:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:47.176 00:55:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:47.176 00:55:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:47.176 00:55:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.176 00:55:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.176 00:55:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:47.176 00:55:21 -- bdev/nbd_common.sh@41 -- # break 00:12:47.176 00:55:21 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.176 00:55:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.176 00:55:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:47.436 00:55:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:47.436 00:55:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:47.436 00:55:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:47.436 00:55:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.436 00:55:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.436 00:55:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:47.436 00:55:21 -- bdev/nbd_common.sh@41 -- # break 00:12:47.436 00:55:21 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.436 00:55:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.436 00:55:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:47.696 00:55:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:47.696 00:55:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:47.696 00:55:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:47.696 00:55:21 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.696 00:55:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.696 00:55:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:47.696 00:55:21 -- bdev/nbd_common.sh@41 -- # break 00:12:47.696 00:55:21 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.696 00:55:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.696 00:55:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:47.955 00:55:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:47.955 00:55:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:47.955 00:55:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:47.955 00:55:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.955 00:55:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.955 00:55:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:47.955 00:55:22 -- bdev/nbd_common.sh@41 -- # break 00:12:47.955 00:55:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.955 00:55:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.955 00:55:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@41 -- # break 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@41 -- # break 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.215 00:55:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:48.475 00:55:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:48.475 00:55:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:48.475 00:55:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:48.475 00:55:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.475 00:55:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.475 00:55:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:48.475 00:55:22 -- bdev/nbd_common.sh@41 -- # break 00:12:48.475 00:55:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.475 00:55:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.475 00:55:22 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:48.735 00:55:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:48.735 00:55:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:48.735 00:55:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:48.735 00:55:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.735 00:55:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.735 00:55:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:48.735 00:55:23 -- bdev/nbd_common.sh@41 -- # break 00:12:48.735 00:55:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.735 00:55:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.735 00:55:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:48.994 00:55:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:48.994 00:55:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:48.994 00:55:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:48.994 00:55:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.994 00:55:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.994 00:55:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:48.994 00:55:23 -- bdev/nbd_common.sh@41 -- # break 00:12:48.994 00:55:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.994 00:55:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.994 00:55:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:49.253 00:55:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:49.253 00:55:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:49.253 00:55:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:49.253 00:55:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.253 00:55:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.253 00:55:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:49.253 00:55:23 -- bdev/nbd_common.sh@41 -- # break 00:12:49.253 00:55:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.253 00:55:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.253 00:55:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:49.512 00:55:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:49.512 00:55:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:49.512 00:55:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:49.512 00:55:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.512 00:55:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.512 00:55:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:49.512 00:55:23 -- bdev/nbd_common.sh@41 -- # break 00:12:49.512 00:55:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.512 00:55:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.512 00:55:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:49.771 00:55:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:49.771 00:55:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:49.771 00:55:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:49.771 00:55:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.771 00:55:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
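The teardown entries above pair each nbd_stop_disk call with a waitfornbd_exit poll; roughly (again a reconstruction from the xtrace, not the exact autotest_common.sh code):

# Sketch of waitfornbd_exit: poll /proc/partitions until the kernel has
# torn the nbd device down, or give up after a bounded number of tries.
waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1        # still present, give the kernel a moment
        else
            break            # gone from /proc/partitions, teardown is done
        fi
    done
    return 0
}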
00:12:49.771 00:55:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:49.771 00:55:23 -- bdev/nbd_common.sh@41 -- # break 00:12:49.771 00:55:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.771 00:55:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.771 00:55:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@41 -- # break 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@41 -- # break 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.030 00:55:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:50.289 00:55:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:50.290 00:55:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:50.290 00:55:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@65 -- # true 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@65 -- # count=0 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@122 -- # count=0 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@127 -- # return 0 00:12:50.549 00:55:24 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 
'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:50.549 00:55:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.550 00:55:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:50.550 00:55:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:50.550 00:55:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:50.550 00:55:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:50.550 00:55:24 -- bdev/nbd_common.sh@12 -- # local i 00:12:50.550 00:55:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:50.550 00:55:24 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:50.550 00:55:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:50.808 /dev/nbd0 00:12:50.808 00:55:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:50.808 00:55:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:50.808 00:55:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:50.808 00:55:24 -- common/autotest_common.sh@867 -- # local i 00:12:50.808 00:55:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:50.808 00:55:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:50.808 00:55:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:50.808 00:55:25 -- common/autotest_common.sh@871 -- # break 00:12:50.808 00:55:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:50.808 00:55:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:50.808 00:55:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.808 1+0 records in 00:12:50.808 1+0 records out 00:12:50.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245515 s, 16.7 MB/s 00:12:50.808 00:55:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.808 00:55:25 -- common/autotest_common.sh@884 -- # size=4096 00:12:50.808 00:55:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.808 00:55:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:50.808 00:55:25 -- common/autotest_common.sh@887 -- # return 0 00:12:50.808 00:55:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.808 00:55:25 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:50.808 00:55:25 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:50.808 /dev/nbd1 00:12:51.067 00:55:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:51.067 00:55:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:51.067 00:55:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:51.067 00:55:25 -- common/autotest_common.sh@867 -- # local i 00:12:51.067 00:55:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:51.067 00:55:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:51.067 00:55:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:51.067 00:55:25 -- common/autotest_common.sh@871 -- # break 00:12:51.067 00:55:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:51.067 00:55:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:51.067 00:55:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.067 1+0 records in 00:12:51.067 1+0 records out 00:12:51.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278809 s, 14.7 MB/s 00:12:51.067 00:55:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.067 00:55:25 -- common/autotest_common.sh@884 -- # size=4096 00:12:51.067 00:55:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.067 00:55:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:51.067 00:55:25 -- common/autotest_common.sh@887 -- # return 0 00:12:51.067 00:55:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.067 00:55:25 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:51.067 00:55:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:51.327 /dev/nbd10 00:12:51.327 00:55:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:51.327 00:55:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:51.327 00:55:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:51.327 00:55:25 -- common/autotest_common.sh@867 -- # local i 00:12:51.327 00:55:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:51.327 00:55:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:51.327 00:55:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:51.327 00:55:25 -- common/autotest_common.sh@871 -- # break 00:12:51.327 00:55:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:51.327 00:55:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:51.327 00:55:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.327 1+0 records in 00:12:51.327 1+0 records out 00:12:51.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285296 s, 14.4 MB/s 00:12:51.327 00:55:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.327 00:55:25 -- common/autotest_common.sh@884 -- # size=4096 00:12:51.327 00:55:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.327 00:55:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:51.327 00:55:25 -- common/autotest_common.sh@887 -- # return 0 00:12:51.327 00:55:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.327 00:55:25 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 
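Each nbd_start_disk call above is followed by a waitfornbd check from common/autotest_common.sh: first wait for the device to appear in /proc/partitions, then prove it is readable by copying a single 4 KiB block with O_DIRECT and checking that the scratch file is non-empty. A hedged sketch of that two-stage check; the scratch path and the polling interval are assumptions.

# Sketch of the readiness check visible in the trace: wait for the device,
# then read one direct-I/O block from it and confirm the copy is non-empty.
waitfornbd_sketch() {
    local nbd_name=$1
    local scratch=/tmp/nbdtest    # the trace writes to test/bdev/nbdtest in the repo
    local i size

    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.5                 # assumed polling interval
    done

    # One 4 KiB read with iflag=direct, as in the trace.
    dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$scratch")
    rm -f "$scratch"
    [ "$size" != 0 ]              # a non-zero copy size means the device answered
}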
00:12:51.327 00:55:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:51.586 /dev/nbd11 00:12:51.586 00:55:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:51.586 00:55:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:51.586 00:55:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:51.586 00:55:25 -- common/autotest_common.sh@867 -- # local i 00:12:51.587 00:55:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:51.587 00:55:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:51.587 00:55:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:51.587 00:55:25 -- common/autotest_common.sh@871 -- # break 00:12:51.587 00:55:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:51.587 00:55:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:51.587 00:55:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.587 1+0 records in 00:12:51.587 1+0 records out 00:12:51.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348228 s, 11.8 MB/s 00:12:51.587 00:55:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.587 00:55:25 -- common/autotest_common.sh@884 -- # size=4096 00:12:51.587 00:55:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.587 00:55:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:51.587 00:55:25 -- common/autotest_common.sh@887 -- # return 0 00:12:51.587 00:55:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.587 00:55:25 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:51.587 00:55:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:51.846 /dev/nbd12 00:12:51.846 00:55:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:51.846 00:55:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:51.846 00:55:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:51.846 00:55:26 -- common/autotest_common.sh@867 -- # local i 00:12:51.846 00:55:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:51.846 00:55:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:51.846 00:55:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:51.846 00:55:26 -- common/autotest_common.sh@871 -- # break 00:12:51.846 00:55:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:51.846 00:55:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:51.846 00:55:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.846 1+0 records in 00:12:51.846 1+0 records out 00:12:51.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494114 s, 8.3 MB/s 00:12:51.846 00:55:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.846 00:55:26 -- common/autotest_common.sh@884 -- # size=4096 00:12:51.846 00:55:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.846 00:55:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:51.846 00:55:26 -- common/autotest_common.sh@887 -- # return 0 00:12:51.846 00:55:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.846 00:55:26 -- 
bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:51.846 00:55:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:52.106 /dev/nbd13 00:12:52.106 00:55:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:52.106 00:55:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:52.106 00:55:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:52.106 00:55:26 -- common/autotest_common.sh@867 -- # local i 00:12:52.106 00:55:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:52.106 00:55:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:52.106 00:55:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:52.106 00:55:26 -- common/autotest_common.sh@871 -- # break 00:12:52.106 00:55:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:52.106 00:55:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:52.106 00:55:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.106 1+0 records in 00:12:52.106 1+0 records out 00:12:52.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396691 s, 10.3 MB/s 00:12:52.106 00:55:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.106 00:55:26 -- common/autotest_common.sh@884 -- # size=4096 00:12:52.106 00:55:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.106 00:55:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:52.106 00:55:26 -- common/autotest_common.sh@887 -- # return 0 00:12:52.106 00:55:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.106 00:55:26 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:52.106 00:55:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:52.366 /dev/nbd14 00:12:52.366 00:55:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:52.366 00:55:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:52.366 00:55:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:12:52.366 00:55:26 -- common/autotest_common.sh@867 -- # local i 00:12:52.366 00:55:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:52.366 00:55:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:52.366 00:55:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:12:52.366 00:55:26 -- common/autotest_common.sh@871 -- # break 00:12:52.366 00:55:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:52.366 00:55:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:52.366 00:55:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.366 1+0 records in 00:12:52.366 1+0 records out 00:12:52.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686441 s, 6.0 MB/s 00:12:52.366 00:55:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.366 00:55:26 -- common/autotest_common.sh@884 -- # size=4096 00:12:52.366 00:55:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.366 00:55:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:52.366 00:55:26 -- common/autotest_common.sh@887 -- # return 0 00:12:52.366 00:55:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
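The start-up phase around these entries walks two parallel arrays, bdev_list and nbd_list, and exports bdev number i onto nbd device number i with scripts/rpc.py nbd_start_disk against the dedicated /var/tmp/spdk-nbd.sock socket. A condensed sketch of that pairing loop, assuming the same socket path and a running SPDK target; the per-device wait is reduced to a bare grep loop here.

# Sketch of the pairing loop driving the nbd_start_disk calls in this trace.
rpc_sock=/var/tmp/spdk-nbd.sock
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

bdev_list=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3
           Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14
          /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7
          /dev/nbd8 /dev/nbd9)

for (( i = 0; i < ${#bdev_list[@]}; i++ )); do
    name=$(basename "${nbd_list[$i]}")
    # Attach bdev i to nbd device i over the dedicated RPC socket.
    "$rpc_py" -s "$rpc_sock" nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
    # Minimal wait until the kernel lists the device (fuller check sketched earlier).
    until grep -q -w "$name" /proc/partitions; do sleep 0.5; done
done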
00:12:52.366 00:55:26 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:52.366 00:55:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:52.625 /dev/nbd15 00:12:52.625 00:55:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:52.625 00:55:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:52.625 00:55:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:12:52.625 00:55:26 -- common/autotest_common.sh@867 -- # local i 00:12:52.625 00:55:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:52.625 00:55:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:52.625 00:55:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:12:52.625 00:55:26 -- common/autotest_common.sh@871 -- # break 00:12:52.625 00:55:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:52.625 00:55:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:52.625 00:55:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.625 1+0 records in 00:12:52.625 1+0 records out 00:12:52.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000921032 s, 4.4 MB/s 00:12:52.625 00:55:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.625 00:55:26 -- common/autotest_common.sh@884 -- # size=4096 00:12:52.625 00:55:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.625 00:55:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:52.625 00:55:26 -- common/autotest_common.sh@887 -- # return 0 00:12:52.625 00:55:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.625 00:55:26 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:52.625 00:55:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:52.884 /dev/nbd2 00:12:52.884 00:55:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:52.884 00:55:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:52.884 00:55:27 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:52.884 00:55:27 -- common/autotest_common.sh@867 -- # local i 00:12:52.884 00:55:27 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:52.884 00:55:27 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:52.884 00:55:27 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:52.884 00:55:27 -- common/autotest_common.sh@871 -- # break 00:12:52.884 00:55:27 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:52.884 00:55:27 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:52.884 00:55:27 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.884 1+0 records in 00:12:52.884 1+0 records out 00:12:52.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539373 s, 7.6 MB/s 00:12:52.884 00:55:27 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.884 00:55:27 -- common/autotest_common.sh@884 -- # size=4096 00:12:52.884 00:55:27 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.884 00:55:27 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:52.884 00:55:27 -- common/autotest_common.sh@887 -- # return 0 00:12:52.884 00:55:27 -- bdev/nbd_common.sh@14 
-- # (( i++ )) 00:12:52.884 00:55:27 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:52.885 00:55:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:53.144 /dev/nbd3 00:12:53.144 00:55:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:53.144 00:55:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:53.144 00:55:27 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:53.144 00:55:27 -- common/autotest_common.sh@867 -- # local i 00:12:53.144 00:55:27 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:53.144 00:55:27 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:53.144 00:55:27 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:53.144 00:55:27 -- common/autotest_common.sh@871 -- # break 00:12:53.144 00:55:27 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:53.144 00:55:27 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:53.144 00:55:27 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.144 1+0 records in 00:12:53.144 1+0 records out 00:12:53.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052261 s, 7.8 MB/s 00:12:53.144 00:55:27 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.144 00:55:27 -- common/autotest_common.sh@884 -- # size=4096 00:12:53.144 00:55:27 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.144 00:55:27 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:53.144 00:55:27 -- common/autotest_common.sh@887 -- # return 0 00:12:53.144 00:55:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.144 00:55:27 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:53.144 00:55:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:53.462 /dev/nbd4 00:12:53.462 00:55:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:53.462 00:55:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:53.462 00:55:27 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:53.462 00:55:27 -- common/autotest_common.sh@867 -- # local i 00:12:53.462 00:55:27 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:53.462 00:55:27 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:53.462 00:55:27 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:53.462 00:55:27 -- common/autotest_common.sh@871 -- # break 00:12:53.462 00:55:27 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:53.462 00:55:27 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:53.462 00:55:27 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.462 1+0 records in 00:12:53.462 1+0 records out 00:12:53.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470034 s, 8.7 MB/s 00:12:53.462 00:55:27 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.462 00:55:27 -- common/autotest_common.sh@884 -- # size=4096 00:12:53.462 00:55:27 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.462 00:55:27 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:53.462 00:55:27 -- common/autotest_common.sh@887 -- # return 0 00:12:53.462 00:55:27 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.462 00:55:27 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:53.462 00:55:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:53.720 /dev/nbd5 00:12:53.720 00:55:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:53.720 00:55:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:53.720 00:55:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:53.720 00:55:28 -- common/autotest_common.sh@867 -- # local i 00:12:53.720 00:55:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:53.720 00:55:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:53.720 00:55:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:53.720 00:55:28 -- common/autotest_common.sh@871 -- # break 00:12:53.720 00:55:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:53.720 00:55:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:53.720 00:55:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.720 1+0 records in 00:12:53.720 1+0 records out 00:12:53.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596033 s, 6.9 MB/s 00:12:53.720 00:55:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.720 00:55:28 -- common/autotest_common.sh@884 -- # size=4096 00:12:53.720 00:55:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.720 00:55:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:53.720 00:55:28 -- common/autotest_common.sh@887 -- # return 0 00:12:53.720 00:55:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.720 00:55:28 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:53.720 00:55:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:53.980 /dev/nbd6 00:12:53.980 00:55:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:53.980 00:55:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:53.980 00:55:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:12:53.980 00:55:28 -- common/autotest_common.sh@867 -- # local i 00:12:53.980 00:55:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:53.980 00:55:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:53.980 00:55:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:12:53.980 00:55:28 -- common/autotest_common.sh@871 -- # break 00:12:53.980 00:55:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:53.980 00:55:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:53.980 00:55:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.980 1+0 records in 00:12:53.980 1+0 records out 00:12:53.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728339 s, 5.6 MB/s 00:12:53.980 00:55:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.980 00:55:28 -- common/autotest_common.sh@884 -- # size=4096 00:12:53.980 00:55:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.980 00:55:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:53.980 00:55:28 -- common/autotest_common.sh@887 -- # return 0 00:12:53.980 00:55:28 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.980 00:55:28 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:53.980 00:55:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:54.239 /dev/nbd7 00:12:54.239 00:55:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:54.239 00:55:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:54.239 00:55:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:12:54.239 00:55:28 -- common/autotest_common.sh@867 -- # local i 00:12:54.239 00:55:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:54.239 00:55:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:54.239 00:55:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:12:54.239 00:55:28 -- common/autotest_common.sh@871 -- # break 00:12:54.239 00:55:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:54.239 00:55:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:54.239 00:55:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.239 1+0 records in 00:12:54.239 1+0 records out 00:12:54.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000827437 s, 5.0 MB/s 00:12:54.239 00:55:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.239 00:55:28 -- common/autotest_common.sh@884 -- # size=4096 00:12:54.239 00:55:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.239 00:55:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:54.239 00:55:28 -- common/autotest_common.sh@887 -- # return 0 00:12:54.239 00:55:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.239 00:55:28 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:54.239 00:55:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:54.498 /dev/nbd8 00:12:54.498 00:55:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:54.498 00:55:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:54.498 00:55:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:12:54.498 00:55:28 -- common/autotest_common.sh@867 -- # local i 00:12:54.498 00:55:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:54.498 00:55:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:54.498 00:55:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:12:54.498 00:55:28 -- common/autotest_common.sh@871 -- # break 00:12:54.498 00:55:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:54.498 00:55:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:54.498 00:55:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.498 1+0 records in 00:12:54.498 1+0 records out 00:12:54.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00087958 s, 4.7 MB/s 00:12:54.498 00:55:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.498 00:55:28 -- common/autotest_common.sh@884 -- # size=4096 00:12:54.498 00:55:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.758 00:55:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:54.758 00:55:28 -- common/autotest_common.sh@887 -- # return 0 00:12:54.758 00:55:28 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.758 00:55:28 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:54.758 00:55:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:55.016 /dev/nbd9 00:12:55.016 00:55:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:55.016 00:55:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:55.016 00:55:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:12:55.016 00:55:29 -- common/autotest_common.sh@867 -- # local i 00:12:55.016 00:55:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:55.016 00:55:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:55.016 00:55:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:12:55.016 00:55:29 -- common/autotest_common.sh@871 -- # break 00:12:55.016 00:55:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:55.016 00:55:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:55.016 00:55:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.016 1+0 records in 00:12:55.016 1+0 records out 00:12:55.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102059 s, 4.0 MB/s 00:12:55.016 00:55:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.016 00:55:29 -- common/autotest_common.sh@884 -- # size=4096 00:12:55.016 00:55:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.016 00:55:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:55.016 00:55:29 -- common/autotest_common.sh@887 -- # return 0 00:12:55.016 00:55:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.016 00:55:29 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:55.016 00:55:29 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:55.016 00:55:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.016 00:55:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd0", 00:12:55.275 "bdev_name": "Malloc0" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd1", 00:12:55.275 "bdev_name": "Malloc1p0" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd10", 00:12:55.275 "bdev_name": "Malloc1p1" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd11", 00:12:55.275 "bdev_name": "Malloc2p0" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd12", 00:12:55.275 "bdev_name": "Malloc2p1" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd13", 00:12:55.275 "bdev_name": "Malloc2p2" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd14", 00:12:55.275 "bdev_name": "Malloc2p3" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd15", 00:12:55.275 "bdev_name": "Malloc2p4" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd2", 00:12:55.275 "bdev_name": "Malloc2p5" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd3", 00:12:55.275 "bdev_name": "Malloc2p6" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd4", 00:12:55.275 "bdev_name": "Malloc2p7" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd5", 00:12:55.275 "bdev_name": 
"TestPT" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd6", 00:12:55.275 "bdev_name": "raid0" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd7", 00:12:55.275 "bdev_name": "concat0" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd8", 00:12:55.275 "bdev_name": "raid1" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd9", 00:12:55.275 "bdev_name": "AIO0" 00:12:55.275 } 00:12:55.275 ]' 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd0", 00:12:55.275 "bdev_name": "Malloc0" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd1", 00:12:55.275 "bdev_name": "Malloc1p0" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd10", 00:12:55.275 "bdev_name": "Malloc1p1" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd11", 00:12:55.275 "bdev_name": "Malloc2p0" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd12", 00:12:55.275 "bdev_name": "Malloc2p1" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd13", 00:12:55.275 "bdev_name": "Malloc2p2" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd14", 00:12:55.275 "bdev_name": "Malloc2p3" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd15", 00:12:55.275 "bdev_name": "Malloc2p4" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd2", 00:12:55.275 "bdev_name": "Malloc2p5" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd3", 00:12:55.275 "bdev_name": "Malloc2p6" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd4", 00:12:55.275 "bdev_name": "Malloc2p7" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd5", 00:12:55.275 "bdev_name": "TestPT" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd6", 00:12:55.275 "bdev_name": "raid0" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd7", 00:12:55.275 "bdev_name": "concat0" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd8", 00:12:55.275 "bdev_name": "raid1" 00:12:55.275 }, 00:12:55.275 { 00:12:55.275 "nbd_device": "/dev/nbd9", 00:12:55.275 "bdev_name": "AIO0" 00:12:55.275 } 00:12:55.275 ]' 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:55.275 /dev/nbd1 00:12:55.275 /dev/nbd10 00:12:55.275 /dev/nbd11 00:12:55.275 /dev/nbd12 00:12:55.275 /dev/nbd13 00:12:55.275 /dev/nbd14 00:12:55.275 /dev/nbd15 00:12:55.275 /dev/nbd2 00:12:55.275 /dev/nbd3 00:12:55.275 /dev/nbd4 00:12:55.275 /dev/nbd5 00:12:55.275 /dev/nbd6 00:12:55.275 /dev/nbd7 00:12:55.275 /dev/nbd8 00:12:55.275 /dev/nbd9' 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:55.275 /dev/nbd1 00:12:55.275 /dev/nbd10 00:12:55.275 /dev/nbd11 00:12:55.275 /dev/nbd12 00:12:55.275 /dev/nbd13 00:12:55.275 /dev/nbd14 00:12:55.275 /dev/nbd15 00:12:55.275 /dev/nbd2 00:12:55.275 /dev/nbd3 00:12:55.275 /dev/nbd4 00:12:55.275 /dev/nbd5 00:12:55.275 /dev/nbd6 00:12:55.275 /dev/nbd7 00:12:55.275 /dev/nbd8 00:12:55.275 /dev/nbd9' 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@65 -- # count=16 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@66 -- # echo 16 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@95 -- # count=16 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:55.275 00:55:29 -- 
bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:55.275 256+0 records in 00:12:55.275 256+0 records out 00:12:55.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010956 s, 95.7 MB/s 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.275 00:55:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:55.534 256+0 records in 00:12:55.534 256+0 records out 00:12:55.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151391 s, 6.9 MB/s 00:12:55.534 00:55:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.534 00:55:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:55.534 256+0 records in 00:12:55.534 256+0 records out 00:12:55.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157279 s, 6.7 MB/s 00:12:55.534 00:55:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.534 00:55:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:55.793 256+0 records in 00:12:55.793 256+0 records out 00:12:55.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153561 s, 6.8 MB/s 00:12:55.793 00:55:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.793 00:55:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:55.793 256+0 records in 00:12:55.793 256+0 records out 00:12:55.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156346 s, 6.7 MB/s 00:12:55.793 00:55:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.794 00:55:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:56.053 256+0 records in 00:12:56.053 256+0 records out 00:12:56.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153602 s, 6.8 MB/s 00:12:56.053 00:55:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.053 00:55:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:56.312 256+0 records in 00:12:56.312 256+0 records out 00:12:56.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155441 s, 6.7 MB/s 00:12:56.312 00:55:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.312 00:55:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:56.312 256+0 records 
in 00:12:56.312 256+0 records out 00:12:56.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153307 s, 6.8 MB/s 00:12:56.312 00:55:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.312 00:55:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:56.571 256+0 records in 00:12:56.571 256+0 records out 00:12:56.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154166 s, 6.8 MB/s 00:12:56.571 00:55:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.571 00:55:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:56.830 256+0 records in 00:12:56.830 256+0 records out 00:12:56.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157626 s, 6.7 MB/s 00:12:56.830 00:55:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.830 00:55:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:56.830 256+0 records in 00:12:56.830 256+0 records out 00:12:56.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15369 s, 6.8 MB/s 00:12:56.830 00:55:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.830 00:55:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:57.089 256+0 records in 00:12:57.089 256+0 records out 00:12:57.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153897 s, 6.8 MB/s 00:12:57.089 00:55:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.089 00:55:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:57.089 256+0 records in 00:12:57.089 256+0 records out 00:12:57.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154582 s, 6.8 MB/s 00:12:57.089 00:55:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.089 00:55:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:57.348 256+0 records in 00:12:57.348 256+0 records out 00:12:57.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156875 s, 6.7 MB/s 00:12:57.348 00:55:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.348 00:55:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:57.608 256+0 records in 00:12:57.608 256+0 records out 00:12:57.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156421 s, 6.7 MB/s 00:12:57.608 00:55:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.608 00:55:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:57.608 256+0 records in 00:12:57.608 256+0 records out 00:12:57.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159494 s, 6.6 MB/s 00:12:57.608 00:55:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.608 00:55:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:57.868 256+0 records in 00:12:57.868 256+0 records out 00:12:57.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.197001 s, 5.3 MB/s 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 
/dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.868 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:58.127 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:58.127 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:58.127 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:58.127 00:55:32 -- bdev/nbd_common.sh@83 
-- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:58.127 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:58.128 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:58.128 00:55:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:58.128 00:55:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:58.128 00:55:32 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:58.128 00:55:32 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:58.128 00:55:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.128 00:55:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:58.128 00:55:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.128 00:55:32 -- bdev/nbd_common.sh@51 -- # local i 00:12:58.128 00:55:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.128 00:55:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:58.387 00:55:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.387 00:55:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.387 00:55:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.387 00:55:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.387 00:55:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.387 00:55:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.387 00:55:32 -- bdev/nbd_common.sh@41 -- # break 00:12:58.387 00:55:32 -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.387 00:55:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.387 00:55:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:58.646 00:55:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.646 00:55:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.646 00:55:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.646 00:55:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.646 00:55:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.646 00:55:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.646 00:55:32 -- bdev/nbd_common.sh@41 -- # break 00:12:58.646 00:55:32 -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.646 00:55:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.646 00:55:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:58.905 00:55:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:58.905 00:55:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:58.905 00:55:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:58.905 00:55:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.905 00:55:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.905 00:55:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:58.905 
00:55:33 -- bdev/nbd_common.sh@41 -- # break 00:12:58.905 00:55:33 -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.905 00:55:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.905 00:55:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:59.165 00:55:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:59.165 00:55:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:59.165 00:55:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:59.165 00:55:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.165 00:55:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.165 00:55:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:59.165 00:55:33 -- bdev/nbd_common.sh@41 -- # break 00:12:59.165 00:55:33 -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.165 00:55:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.165 00:55:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:59.424 00:55:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:59.424 00:55:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:59.424 00:55:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:59.424 00:55:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.424 00:55:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.424 00:55:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:59.424 00:55:33 -- bdev/nbd_common.sh@41 -- # break 00:12:59.424 00:55:33 -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.424 00:55:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.424 00:55:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:59.684 00:55:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:59.684 00:55:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:59.684 00:55:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:59.684 00:55:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.684 00:55:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.684 00:55:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:59.684 00:55:33 -- bdev/nbd_common.sh@41 -- # break 00:12:59.684 00:55:33 -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.684 00:55:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.684 00:55:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:59.943 00:55:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:59.943 00:55:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:59.943 00:55:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:59.943 00:55:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.943 00:55:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.943 00:55:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:59.943 00:55:34 -- bdev/nbd_common.sh@41 -- # break 00:12:59.943 00:55:34 -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.943 00:55:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.943 00:55:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:59.943 00:55:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:13:00.202 00:55:34 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:00.202 00:55:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:00.202 00:55:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.202 00:55:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.202 00:55:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:00.202 00:55:34 -- bdev/nbd_common.sh@41 -- # break 00:13:00.202 00:55:34 -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.202 00:55:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.202 00:55:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@41 -- # break 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@41 -- # break 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.462 00:55:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:00.722 00:55:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:00.722 00:55:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:00.722 00:55:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:00.722 00:55:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.722 00:55:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.722 00:55:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:00.722 00:55:35 -- bdev/nbd_common.sh@41 -- # break 00:13:00.722 00:55:35 -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.722 00:55:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.722 00:55:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@41 -- # break 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@45 -- # return 0 
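Once the devices are torn down, the harness re-queries the target (the nbd_get_count sequence immediately below) and insists that nothing is still exported: nbd_get_disks returns a JSON array, jq extracts the nbd_device fields, and grep -c counts how many of them look like /dev/nbd*. A hedged one-function sketch of that count check:

# Sketch of the count check that follows: ask the target for its exported
# disks and count the /dev/nbd entries in the JSON reply.
nbd_get_count_sketch() {
    local rpc_sock=$1
    local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local json names count

    json=$("$rpc_py" -s "$rpc_sock" nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    # grep -c exits non-zero when the count is 0; the guard mirrors the explicit
    # "true" visible at this point in the trace.
    count=$(echo "$names" | grep -c /dev/nbd || true)
    echo "$count"
}

A caller would then assert, as the trace does, that the reported count is 0, e.g. [ "$(nbd_get_count_sketch /var/tmp/spdk-nbd.sock)" -eq 0 ].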
00:13:00.981 00:55:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.981 00:55:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:01.240 00:55:35 -- bdev/nbd_common.sh@41 -- # break 00:13:01.240 00:55:35 -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.240 00:55:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.240 00:55:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:01.240 00:55:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@41 -- # break 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@41 -- # break 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.500 00:55:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:01.758 00:55:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:01.758 00:55:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:01.758 00:55:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:01.758 00:55:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.758 00:55:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.758 00:55:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:01.758 00:55:36 -- bdev/nbd_common.sh@41 -- # break 00:13:01.758 00:55:36 -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.758 00:55:36 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:01.758 00:55:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.758 00:55:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:02.020 00:55:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 
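After the last disk is stopped, the trace runs the nbd_get_count check: nbd_get_disks is queried over the same RPC socket, the JSON is reduced to device names with jq, and the /dev/nbd occurrences are counted (expected to be 0 here). A hedged sketch of that check, using only the commands and paths visible in this log:

# Sketch of the nbd_get_count idiom from the trace above; rpc.py path and socket are
# the ones used throughout this log. "|| true" mirrors the trace and keeps grep's
# non-zero exit (when it counts 0 matches) from aborting the script.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)          # '[]' once every disk is stopped
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
if [ "$count" -ne 0 ]; then
    echo "nbd devices still attached: $count"
    exit 1
fi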
00:13:02.021 00:55:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@65 -- # true 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@65 -- # count=0 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@104 -- # count=0 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@109 -- # return 0 00:13:02.021 00:55:36 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:02.021 00:55:36 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:02.286 malloc_lvol_verify 00:13:02.286 00:55:36 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:02.549 90ed6ef1-f15c-4031-83bc-40f72bada223 00:13:02.549 00:55:36 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:02.808 2aadd80a-9874-438f-8ba7-56b53a05679b 00:13:02.808 00:55:37 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:03.068 /dev/nbd0 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:03.068 mke2fs 1.46.5 (30-Dec-2021) 00:13:03.068 00:13:03.068 Filesystem too small for a journal 00:13:03.068 Discarding device blocks: 0/1024 done 00:13:03.068 Creating filesystem with 1024 4k blocks and 1024 inodes 00:13:03.068 00:13:03.068 Allocating group tables: 0/1 done 00:13:03.068 Writing inode tables: 0/1 done 00:13:03.068 Writing superblocks and filesystem accounting information: 0/1 done 00:13:03.068 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@51 -- # local i 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.068 00:55:37 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@41 -- # break 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:03.068 00:55:37 -- bdev/nbd_common.sh@147 -- # return 0 00:13:03.068 00:55:37 -- bdev/blockdev.sh@324 -- # killprocess 119961 00:13:03.068 00:55:37 -- common/autotest_common.sh@936 -- # '[' -z 119961 ']' 00:13:03.068 00:55:37 -- common/autotest_common.sh@940 -- # kill -0 119961 00:13:03.068 00:55:37 -- common/autotest_common.sh@941 -- # uname 00:13:03.068 00:55:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:03.068 00:55:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119961 00:13:03.068 00:55:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:03.068 00:55:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:03.068 00:55:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119961' 00:13:03.068 killing process with pid 119961 00:13:03.068 00:55:37 -- common/autotest_common.sh@955 -- # kill 119961 00:13:03.068 00:55:37 -- common/autotest_common.sh@960 -- # wait 119961 00:13:04.006 ************************************ 00:13:04.006 END TEST bdev_nbd 00:13:04.006 ************************************ 00:13:04.006 00:55:38 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:13:04.006 00:13:04.006 real 0m23.068s 00:13:04.006 user 0m29.622s 00:13:04.006 sys 0m11.014s 00:13:04.006 00:55:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:04.006 00:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:04.006 00:55:38 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:13:04.006 00:55:38 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:13:04.006 00:55:38 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:13:04.006 00:55:38 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:13:04.006 00:55:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:04.006 00:55:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:04.006 00:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:04.006 ************************************ 00:13:04.006 START TEST bdev_fio 00:13:04.006 ************************************ 00:13:04.006 00:55:38 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:13:04.006 00:55:38 -- bdev/blockdev.sh@329 -- # local env_context 00:13:04.006 00:55:38 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:04.006 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:04.006 00:55:38 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:04.006 00:55:38 -- bdev/blockdev.sh@337 -- # echo '' 00:13:04.006 00:55:38 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:13:04.006 00:55:38 -- bdev/blockdev.sh@337 -- # env_context= 00:13:04.006 00:55:38 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:04.006 00:55:38 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:04.007 00:55:38 -- common/autotest_common.sh@1270 -- # local workload=verify 00:13:04.007 00:55:38 -- common/autotest_common.sh@1271 -- # 
local bdev_type=AIO 00:13:04.007 00:55:38 -- common/autotest_common.sh@1272 -- # local env_context= 00:13:04.007 00:55:38 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:13:04.007 00:55:38 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:04.007 00:55:38 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:13:04.007 00:55:38 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:13:04.007 00:55:38 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:04.007 00:55:38 -- common/autotest_common.sh@1290 -- # cat 00:13:04.007 00:55:38 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:13:04.007 00:55:38 -- common/autotest_common.sh@1303 -- # cat 00:13:04.007 00:55:38 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:13:04.007 00:55:38 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:13:04.007 00:55:38 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:04.007 00:55:38 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo 
'[job_Malloc2p7]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:13:04.007 00:55:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:04.007 00:55:38 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:13:04.007 00:55:38 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:04.007 00:55:38 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:04.007 00:55:38 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:04.007 00:55:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:04.007 00:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:04.007 ************************************ 00:13:04.007 START TEST bdev_fio_rw_verify 00:13:04.007 ************************************ 00:13:04.007 00:55:38 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:04.007 00:55:38 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:04.007 00:55:38 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:13:04.007 00:55:38 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:04.007 00:55:38 -- common/autotest_common.sh@1328 -- # local sanitizers 00:13:04.007 00:55:38 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:04.007 00:55:38 -- common/autotest_common.sh@1330 -- # shift 00:13:04.007 00:55:38 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:13:04.007 00:55:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:04.007 00:55:38 -- common/autotest_common.sh@1334 -- # 
ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:04.007 00:55:38 -- common/autotest_common.sh@1334 -- # grep libasan 00:13:04.007 00:55:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:04.007 00:55:38 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:13:04.007 00:55:38 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:13:04.007 00:55:38 -- common/autotest_common.sh@1336 -- # break 00:13:04.007 00:55:38 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:04.007 00:55:38 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:04.267 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.267 fio-3.35 00:13:04.267 Starting 16 threads 00:13:16.478 00:13:16.478 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=121102: Mon Nov 18 00:55:49 2024 00:13:16.478 read: IOPS=84.0k, BW=328MiB/s (344MB/s)(3284MiB/10006msec) 00:13:16.478 slat (nsec): min=1890, max=36042k, avg=31299.03, stdev=402497.50 00:13:16.478 clat (usec): min=7, max=51748, avg=262.80, stdev=1207.17 00:13:16.478 lat (usec): min=21, max=51764, avg=294.10, stdev=1272.19 00:13:16.478 clat 
percentiles (usec): 00:13:16.478 | 50.000th=[ 155], 99.000th=[ 635], 99.900th=[16319], 99.990th=[24773], 00:13:16.478 | 99.999th=[51119] 00:13:16.478 write: IOPS=136k, BW=530MiB/s (556MB/s)(5249MiB/9904msec); 0 zone resets 00:13:16.478 slat (usec): min=4, max=70366, avg=60.38, stdev=646.82 00:13:16.478 clat (usec): min=7, max=70711, avg=342.06, stdev=1448.36 00:13:16.478 lat (usec): min=32, max=70735, avg=402.44, stdev=1587.07 00:13:16.478 clat percentiles (usec): 00:13:16.478 | 50.000th=[ 198], 99.000th=[ 4228], 99.900th=[20317], 99.990th=[35390], 00:13:16.478 | 99.999th=[49546] 00:13:16.478 bw ( KiB/s): min=349592, max=879096, per=98.90%, avg=536741.47, stdev=9285.27, samples=304 00:13:16.478 iops : min=87398, max=219774, avg=134185.26, stdev=2321.32, samples=304 00:13:16.478 lat (usec) : 10=0.01%, 20=0.01%, 50=0.90%, 100=14.65%, 250=61.69% 00:13:16.478 lat (usec) : 500=20.27%, 750=1.23%, 1000=0.14% 00:13:16.478 lat (msec) : 2=0.10%, 4=0.09%, 10=0.26%, 20=0.55%, 50=0.08% 00:13:16.478 lat (msec) : 100=0.01% 00:13:16.478 cpu : usr=55.44%, sys=2.06%, ctx=236951, majf=2, minf=103371 00:13:16.478 IO depths : 1=11.5%, 2=24.2%, 4=51.3%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.478 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.478 issued rwts: total=840688,1343697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.478 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:16.478 00:13:16.478 Run status group 0 (all jobs): 00:13:16.478 READ: bw=328MiB/s (344MB/s), 328MiB/s-328MiB/s (344MB/s-344MB/s), io=3284MiB (3443MB), run=10006-10006msec 00:13:16.478 WRITE: bw=530MiB/s (556MB/s), 530MiB/s-530MiB/s (556MB/s-556MB/s), io=5249MiB (5504MB), run=9904-9904msec 00:13:16.478 ----------------------------------------------------- 00:13:16.478 Suppressions used: 00:13:16.478 count bytes template 00:13:16.478 16 140 /usr/src/fio/parse.c 00:13:16.478 12515 1201440 /usr/src/fio/iolog.c 00:13:16.478 1 904 libcrypto.so 00:13:16.478 ----------------------------------------------------- 00:13:16.478 00:13:16.478 ************************************ 00:13:16.478 END TEST bdev_fio_rw_verify 00:13:16.478 ************************************ 00:13:16.478 00:13:16.478 real 0m12.272s 00:13:16.478 user 1m31.762s 00:13:16.478 sys 0m4.397s 00:13:16.478 00:55:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:16.478 00:55:50 -- common/autotest_common.sh@10 -- # set +x 00:13:16.478 00:55:50 -- bdev/blockdev.sh@348 -- # rm -f 00:13:16.478 00:55:50 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:16.478 00:55:50 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:16.478 00:55:50 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:16.478 00:55:50 -- common/autotest_common.sh@1270 -- # local workload=trim 00:13:16.478 00:55:50 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:13:16.478 00:55:50 -- common/autotest_common.sh@1272 -- # local env_context= 00:13:16.478 00:55:50 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:13:16.478 00:55:50 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:16.478 00:55:50 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:13:16.478 00:55:50 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:13:16.478 00:55:50 
-- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:16.478 00:55:50 -- common/autotest_common.sh@1290 -- # cat 00:13:16.478 00:55:50 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:13:16.478 00:55:50 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:13:16.478 00:55:50 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:13:16.478 00:55:50 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:16.479 00:55:50 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2f552dcc-3674-4ddc-bdc3-57ac3f42d480"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2f552dcc-3674-4ddc-bdc3-57ac3f42d480",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "04690189-c5c6-5825-8c80-ee8b2c8f4eb0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "04690189-c5c6-5825-8c80-ee8b2c8f4eb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "6329a1bc-e850-5f8f-8e68-26f2940c72e2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "6329a1bc-e850-5f8f-8e68-26f2940c72e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "da429998-f024-564a-b23e-207b91669d72"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "da429998-f024-564a-b23e-207b91669d72",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": 
false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "c4f8ce9b-7b73-5157-85f2-56ba96ee1c2c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c4f8ce9b-7b73-5157-85f2-56ba96ee1c2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f201412f-8107-55aa-a8e0-6ab94db7ddb7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f201412f-8107-55aa-a8e0-6ab94db7ddb7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6ebd439c-c4c4-50da-915a-d931696c4ae4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6ebd439c-c4c4-50da-915a-d931696c4ae4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "c66b342b-1498-59a9-a2ae-58330ee05455"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c66b342b-1498-59a9-a2ae-58330ee05455",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "07823cc9-566b-53dd-bf87-14ff25292a0f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "07823cc9-566b-53dd-bf87-14ff25292a0f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "56ff126d-0df4-5cc0-a8e5-2ebaa10b11db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "56ff126d-0df4-5cc0-a8e5-2ebaa10b11db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "15d03432-f62f-5a56-97d9-eb590cf337e0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "15d03432-f62f-5a56-97d9-eb590cf337e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "40bee1b1-8787-50e6-a51e-ccbd59417d2d"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "40bee1b1-8787-50e6-a51e-ccbd59417d2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "45af5eda-1584-4764-9389-92e20a57f1fe"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "45af5eda-1584-4764-9389-92e20a57f1fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "45af5eda-1584-4764-9389-92e20a57f1fe",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "23880358-058b-42fa-b12f-6eb093dabf8c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "8c477bed-7a78-4dd6-98a9-7a040b561fdc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "eddabf7d-5abe-4238-ac79-4b82a7646535"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "eddabf7d-5abe-4238-ac79-4b82a7646535",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "eddabf7d-5abe-4238-ac79-4b82a7646535",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "c9bfca97-3010-473a-a9e3-0be2032c8998",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "0e4ea2fc-313a-4316-bb64-b38d04b2b77e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "d4407d14-459a-4957-a811-b0f1e3d8c097"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d4407d14-459a-4957-a811-b0f1e3d8c097",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d4407d14-459a-4957-a811-b0f1e3d8c097",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "62f657c2-89d2-4894-9364-11ea6dc46397",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": 
"dfd71065-cfeb-441a-98a9-12a3d91b547d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "3d53e94c-5cbf-49b5-bd2b-ad50c2cc619a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "3d53e94c-5cbf-49b5-bd2b-ad50c2cc619a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:16.479 00:55:50 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:13:16.479 Malloc1p0 00:13:16.479 Malloc1p1 00:13:16.479 Malloc2p0 00:13:16.479 Malloc2p1 00:13:16.479 Malloc2p2 00:13:16.479 Malloc2p3 00:13:16.479 Malloc2p4 00:13:16.479 Malloc2p5 00:13:16.479 Malloc2p6 00:13:16.479 Malloc2p7 00:13:16.479 TestPT 00:13:16.479 raid0 00:13:16.479 concat0 ]] 00:13:16.479 00:55:50 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2f552dcc-3674-4ddc-bdc3-57ac3f42d480"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2f552dcc-3674-4ddc-bdc3-57ac3f42d480",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "04690189-c5c6-5825-8c80-ee8b2c8f4eb0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "04690189-c5c6-5825-8c80-ee8b2c8f4eb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "6329a1bc-e850-5f8f-8e68-26f2940c72e2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "6329a1bc-e850-5f8f-8e68-26f2940c72e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' 
' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "da429998-f024-564a-b23e-207b91669d72"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "da429998-f024-564a-b23e-207b91669d72",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "c4f8ce9b-7b73-5157-85f2-56ba96ee1c2c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c4f8ce9b-7b73-5157-85f2-56ba96ee1c2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f201412f-8107-55aa-a8e0-6ab94db7ddb7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f201412f-8107-55aa-a8e0-6ab94db7ddb7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6ebd439c-c4c4-50da-915a-d931696c4ae4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6ebd439c-c4c4-50da-915a-d931696c4ae4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "c66b342b-1498-59a9-a2ae-58330ee05455"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c66b342b-1498-59a9-a2ae-58330ee05455",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "07823cc9-566b-53dd-bf87-14ff25292a0f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "07823cc9-566b-53dd-bf87-14ff25292a0f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "56ff126d-0df4-5cc0-a8e5-2ebaa10b11db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "56ff126d-0df4-5cc0-a8e5-2ebaa10b11db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "15d03432-f62f-5a56-97d9-eb590cf337e0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "15d03432-f62f-5a56-97d9-eb590cf337e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "40bee1b1-8787-50e6-a51e-ccbd59417d2d"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "40bee1b1-8787-50e6-a51e-ccbd59417d2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "45af5eda-1584-4764-9389-92e20a57f1fe"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "45af5eda-1584-4764-9389-92e20a57f1fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "45af5eda-1584-4764-9389-92e20a57f1fe",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "23880358-058b-42fa-b12f-6eb093dabf8c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "8c477bed-7a78-4dd6-98a9-7a040b561fdc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "eddabf7d-5abe-4238-ac79-4b82a7646535"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "eddabf7d-5abe-4238-ac79-4b82a7646535",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "eddabf7d-5abe-4238-ac79-4b82a7646535",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "c9bfca97-3010-473a-a9e3-0be2032c8998",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "0e4ea2fc-313a-4316-bb64-b38d04b2b77e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "d4407d14-459a-4957-a811-b0f1e3d8c097"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d4407d14-459a-4957-a811-b0f1e3d8c097",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d4407d14-459a-4957-a811-b0f1e3d8c097",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "62f657c2-89d2-4894-9364-11ea6dc46397",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "dfd71065-cfeb-441a-98a9-12a3d91b547d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "3d53e94c-5cbf-49b5-bd2b-ad50c2cc619a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "3d53e94c-5cbf-49b5-bd2b-ad50c2cc619a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:16.481 00:55:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:16.481 00:55:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:16.481 00:55:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:16.481 00:55:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:16.481 00:55:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:16.481 00:55:50 -- 
bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:16.481 00:55:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:16.481 00:55:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:16.481 00:55:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:16.481 00:55:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:16.481 00:55:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:16.481 00:55:50 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:16.481 00:55:50 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:13:16.481 00:55:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:16.481 00:55:50 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:13:16.481 00:55:50 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:13:16.481 00:55:50 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.481 00:55:50 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:16.481 00:55:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:16.481 00:55:50 -- common/autotest_common.sh@10 -- # set +x 00:13:16.481 ************************************ 00:13:16.481 START TEST bdev_fio_trim 00:13:16.481 ************************************ 00:13:16.481 00:55:50 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.481 
00:55:50 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.481 00:55:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:13:16.481 00:55:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:16.481 00:55:50 -- common/autotest_common.sh@1328 -- # local sanitizers 00:13:16.481 00:55:50 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:16.481 00:55:50 -- common/autotest_common.sh@1330 -- # shift 00:13:16.481 00:55:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:13:16.481 00:55:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:16.481 00:55:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:16.481 00:55:50 -- common/autotest_common.sh@1334 -- # grep libasan 00:13:16.481 00:55:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:16.481 00:55:50 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:13:16.481 00:55:50 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:13:16.481 00:55:50 -- common/autotest_common.sh@1336 -- # break 00:13:16.481 00:55:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:16.481 00:55:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.741 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.741 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.741 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.741 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.741 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.741 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.741 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.741 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.741 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.741 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.741 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.741 job_TestPT: (g=0): rw=trimwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.741 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.741 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.741 fio-3.35 00:13:16.741 Starting 14 threads 00:13:28.956 00:13:28.956 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=121311: Mon Nov 18 00:56:01 2024 00:13:28.956 write: IOPS=175k, BW=684MiB/s (717MB/s)(6841MiB/10002msec); 0 zone resets 00:13:28.956 slat (usec): min=2, max=24061, avg=27.03, stdev=337.50 00:13:28.956 clat (usec): min=20, max=32226, avg=212.17, stdev=1011.13 00:13:28.956 lat (usec): min=33, max=32241, avg=239.20, stdev=1065.28 00:13:28.956 clat percentiles (usec): 00:13:28.956 | 50.000th=[ 137], 99.000th=[ 515], 99.900th=[16188], 99.990th=[20055], 00:13:28.956 | 99.999th=[28181] 00:13:28.956 bw ( KiB/s): min=488496, max=997568, per=99.91%, avg=699716.47, stdev=12657.31, samples=266 00:13:28.956 iops : min=122124, max=249392, avg=174929.05, stdev=3164.33, samples=266 00:13:28.956 trim: IOPS=175k, BW=684MiB/s (717MB/s)(6841MiB/10002msec); 0 zone resets 00:13:28.956 slat (usec): min=4, max=28073, avg=19.87, stdev=283.73 00:13:28.956 clat (usec): min=3, max=32241, avg=213.62, stdev=949.90 00:13:28.956 lat (usec): min=12, max=32252, avg=233.49, stdev=991.21 00:13:28.956 clat percentiles (usec): 00:13:28.956 | 50.000th=[ 153], 99.000th=[ 293], 99.900th=[16188], 99.990th=[20055], 00:13:28.956 | 99.999th=[28181] 00:13:28.956 bw ( KiB/s): min=488432, max=997504, per=99.91%, avg=699716.47, stdev=12656.67, samples=266 00:13:28.956 iops : min=122108, max=249376, avg=174929.05, stdev=3164.17, samples=266 00:13:28.956 lat (usec) : 4=0.01%, 10=0.12%, 20=0.27%, 50=1.01%, 100=17.39% 00:13:28.956 lat (usec) : 250=77.71%, 500=2.67%, 750=0.36%, 1000=0.01% 00:13:28.956 lat (msec) : 2=0.01%, 4=0.01%, 10=0.03%, 20=0.39%, 50=0.01% 00:13:28.956 cpu : usr=69.11%, sys=0.53%, ctx=173065, majf=0, minf=9062 00:13:28.956 IO depths : 1=12.3%, 2=24.7%, 4=50.0%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:28.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.956 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.956 issued rwts: total=0,1751245,1751246,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.956 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:28.956 00:13:28.956 Run status group 0 (all jobs): 00:13:28.956 WRITE: bw=684MiB/s (717MB/s), 684MiB/s-684MiB/s (717MB/s-717MB/s), io=6841MiB (7173MB), run=10002-10002msec 00:13:28.956 TRIM: bw=684MiB/s (717MB/s), 684MiB/s-684MiB/s (717MB/s-717MB/s), io=6841MiB (7173MB), run=10002-10002msec 00:13:28.956 ----------------------------------------------------- 00:13:28.956 Suppressions used: 00:13:28.956 count bytes template 00:13:28.956 14 129 /usr/src/fio/parse.c 00:13:28.956 1 904 libcrypto.so 00:13:28.956 ----------------------------------------------------- 00:13:28.956 00:13:28.956 ************************************ 00:13:28.956 END TEST bdev_fio_trim 00:13:28.956 ************************************ 00:13:28.956 00:13:28.956 real 0m11.876s 00:13:28.956 user 1m39.262s 00:13:28.956 sys 0m1.680s 00:13:28.956 00:56:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:28.956 00:56:02 -- common/autotest_common.sh@10 -- # set +x 00:13:28.956 00:56:02 -- bdev/blockdev.sh@366 -- # rm -f 00:13:28.956 00:56:02 -- bdev/blockdev.sh@367 -- 
# rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.956 00:56:02 -- bdev/blockdev.sh@368 -- # popd 00:13:28.956 /home/vagrant/spdk_repo/spdk 00:13:28.956 00:56:02 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:13:28.956 00:13:28.956 real 0m24.535s 00:13:28.956 user 3m11.202s 00:13:28.956 sys 0m6.259s 00:13:28.956 00:56:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:28.956 00:56:02 -- common/autotest_common.sh@10 -- # set +x 00:13:28.956 ************************************ 00:13:28.956 END TEST bdev_fio 00:13:28.956 ************************************ 00:13:28.956 00:56:02 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:28.956 00:56:02 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:28.956 00:56:02 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:13:28.956 00:56:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:28.956 00:56:02 -- common/autotest_common.sh@10 -- # set +x 00:13:28.956 ************************************ 00:13:28.956 START TEST bdev_verify 00:13:28.956 ************************************ 00:13:28.956 00:56:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:28.956 [2024-11-18 00:56:02.838543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:28.956 [2024-11-18 00:56:02.838930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121480 ] 00:13:28.956 [2024-11-18 00:56:02.986679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:28.956 [2024-11-18 00:56:03.058670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.956 [2024-11-18 00:56:03.058672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.956 [2024-11-18 00:56:03.237265] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:28.956 [2024-11-18 00:56:03.237619] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:28.956 [2024-11-18 00:56:03.245170] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:28.956 [2024-11-18 00:56:03.245351] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:28.956 [2024-11-18 00:56:03.253255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:28.957 [2024-11-18 00:56:03.253422] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:28.957 [2024-11-18 00:56:03.253533] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:29.216 [2024-11-18 00:56:03.367846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:29.216 [2024-11-18 00:56:03.368154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.216 [2024-11-18 00:56:03.368267] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:29.216 [2024-11-18 00:56:03.368361] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.216 [2024-11-18 00:56:03.371636] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.216 [2024-11-18 00:56:03.371803] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:29.475 Running I/O for 5 seconds... 00:13:34.781 00:13:34.781 Latency(us) 00:13:34.781 [2024-11-18T00:56:09.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.781 [2024-11-18T00:56:09.180Z] Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.781 Verification LBA range: start 0x0 length 0x1000 00:13:34.781 Malloc0 : 5.17 1726.18 6.74 0.00 0.00 73671.88 1997.29 208716.56 00:13:34.781 [2024-11-18T00:56:09.180Z] Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.781 Verification LBA range: start 0x1000 length 0x1000 00:13:34.781 Malloc0 : 5.17 1702.57 6.65 0.00 0.00 74699.69 1716.42 269633.83 00:13:34.781 [2024-11-18T00:56:09.181Z] Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x0 length 0x800 00:13:34.782 Malloc1p0 : 5.17 1195.63 4.67 0.00 0.00 106123.64 3760.52 128825.05 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x800 length 0x800 00:13:34.782 Malloc1p0 : 5.17 1195.63 4.67 0.00 0.00 106116.74 3760.52 128825.05 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x0 length 0x800 00:13:34.782 Malloc1p1 : 5.17 1194.96 4.67 0.00 0.00 105971.38 3854.14 123332.51 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x800 length 0x800 00:13:34.782 Malloc1p1 : 5.17 1194.96 4.67 0.00 0.00 105974.33 3869.74 123831.83 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x0 length 0x200 00:13:34.782 Malloc2p0 : 5.18 1194.29 4.67 0.00 0.00 105849.44 4868.39 117839.97 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x200 length 0x200 00:13:34.782 Malloc2p0 : 5.18 1194.29 4.67 0.00 0.00 105852.70 4868.39 118339.29 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x0 length 0x200 00:13:34.782 Malloc2p1 : 5.18 1193.64 4.66 0.00 0.00 105682.95 3651.29 112846.75 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x200 length 0x200 00:13:34.782 Malloc2p1 : 5.18 1193.63 4.66 0.00 0.00 105684.02 3651.29 112846.75 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x0 length 0x200 00:13:34.782 Malloc2p2 : 5.18 1192.99 4.66 0.00 0.00 105553.46 3417.23 109351.50 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x200 length 0x200 00:13:34.782 Malloc2p2 : 5.18 1192.99 4.66 0.00 0.00 
105564.98 3339.22 109351.50 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x0 length 0x200 00:13:34.782 Malloc2p3 : 5.18 1192.34 4.66 0.00 0.00 105462.52 3370.42 105856.24 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x200 length 0x200 00:13:34.782 Malloc2p3 : 5.18 1192.34 4.66 0.00 0.00 105443.91 3386.03 106355.57 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x0 length 0x200 00:13:34.782 Malloc2p4 : 5.19 1191.70 4.66 0.00 0.00 105331.20 3510.86 102360.99 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x200 length 0x200 00:13:34.782 Malloc2p4 : 5.19 1191.69 4.66 0.00 0.00 105346.82 3495.25 102860.31 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x0 length 0x200 00:13:34.782 Malloc2p5 : 5.19 1191.06 4.65 0.00 0.00 105222.88 3323.61 99365.06 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x200 length 0x200 00:13:34.782 Malloc2p5 : 5.19 1191.05 4.65 0.00 0.00 105254.81 3401.63 99365.06 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x0 length 0x200 00:13:34.782 Malloc2p6 : 5.19 1190.45 4.65 0.00 0.00 105142.72 3604.48 95869.81 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x200 length 0x200 00:13:34.782 Malloc2p6 : 5.19 1190.44 4.65 0.00 0.00 105139.78 3542.06 95869.81 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x0 length 0x200 00:13:34.782 Malloc2p7 : 5.20 1189.83 4.65 0.00 0.00 105021.38 3386.03 92374.55 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x200 length 0x200 00:13:34.782 Malloc2p7 : 5.20 1189.81 4.65 0.00 0.00 105007.14 3464.05 92374.55 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x0 length 0x1000 00:13:34.782 TestPT : 5.20 1176.12 4.59 0.00 0.00 106102.74 8176.40 92374.55 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x1000 length 0x1000 00:13:34.782 TestPT : 5.20 1161.67 4.54 0.00 0.00 107477.86 7302.58 151793.86 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x0 length 0x2000 00:13:34.782 raid0 : 5.20 1188.65 4.64 0.00 0.00 104694.62 3448.44 85883.37 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x2000 length 0x2000 00:13:34.782 raid0 : 
5.20 1188.62 4.64 0.00 0.00 104711.61 3417.23 85384.05 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.782 Verification LBA range: start 0x0 length 0x2000 00:13:34.782 concat0 : 5.20 1188.09 4.64 0.00 0.00 104592.04 3557.67 85883.37 00:13:34.782 [2024-11-18T00:56:09.181Z] Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.783 Verification LBA range: start 0x2000 length 0x2000 00:13:34.783 concat0 : 5.20 1188.06 4.64 0.00 0.00 104612.92 3604.48 85384.05 00:13:34.783 [2024-11-18T00:56:09.182Z] Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.783 Verification LBA range: start 0x0 length 0x1000 00:13:34.783 raid1 : 5.21 1204.53 4.71 0.00 0.00 103749.29 2106.51 85384.05 00:13:34.783 [2024-11-18T00:56:09.182Z] Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.783 Verification LBA range: start 0x1000 length 0x1000 00:13:34.783 raid1 : 5.21 1204.50 4.71 0.00 0.00 103745.79 2075.31 84884.72 00:13:34.783 [2024-11-18T00:56:09.182Z] Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.783 Verification LBA range: start 0x0 length 0x4e2 00:13:34.783 AIO0 : 5.21 1203.99 4.70 0.00 0.00 103520.56 3136.37 87880.66 00:13:34.783 [2024-11-18T00:56:09.182Z] Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.783 Verification LBA range: start 0x4e2 length 0x4e2 00:13:34.783 AIO0 : 5.21 1203.96 4.70 0.00 0.00 103530.43 3105.16 86882.01 00:13:34.783 [2024-11-18T00:56:09.182Z] =================================================================================================================== 00:13:34.783 [2024-11-18T00:56:09.182Z] Total : 39190.67 153.09 0.00 0.00 102537.63 1716.42 269633.83 00:13:35.351 ************************************ 00:13:35.351 END TEST bdev_verify 00:13:35.351 ************************************ 00:13:35.351 00:13:35.351 real 0m6.791s 00:13:35.351 user 0m11.151s 00:13:35.351 sys 0m0.679s 00:13:35.351 00:56:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:35.351 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:13:35.351 00:56:09 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:35.351 00:56:09 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:13:35.351 00:56:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:35.351 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:13:35.351 ************************************ 00:13:35.351 START TEST bdev_verify_big_io 00:13:35.351 ************************************ 00:13:35.351 00:56:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:35.351 [2024-11-18 00:56:09.695502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
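(A minimal sketch, not part of the captured output: both the verify pass that just finished and the big-I/O pass starting here drive the same bdevperf example binary against the bdev.json generated earlier in this job. Assuming the same workspace layout, the plain 4 KiB verify run could be reproduced by hand as follows.)

    cd /home/vagrant/spdk_repo/spdk
    # flags as in the trace above: -q 128 queue depth, -o 4096 byte I/Os,
    # -w verify workload, -t 5 second run, -m 0x3 core mask (cores 0 and 1)
    ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3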
00:13:35.351 [2024-11-18 00:56:09.695961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121588 ] 00:13:35.611 [2024-11-18 00:56:09.851468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:35.611 [2024-11-18 00:56:09.920881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.611 [2024-11-18 00:56:09.920881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.870 [2024-11-18 00:56:10.123718] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:35.870 [2024-11-18 00:56:10.124116] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:35.870 [2024-11-18 00:56:10.131600] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:35.870 [2024-11-18 00:56:10.131860] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:35.870 [2024-11-18 00:56:10.139683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:35.870 [2024-11-18 00:56:10.139905] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:35.870 [2024-11-18 00:56:10.140076] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:36.129 [2024-11-18 00:56:10.273236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:36.129 [2024-11-18 00:56:10.273674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.129 [2024-11-18 00:56:10.273832] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:36.129 [2024-11-18 00:56:10.274074] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.129 [2024-11-18 00:56:10.278419] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.129 [2024-11-18 00:56:10.278647] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:36.129 [2024-11-18 00:56:10.499830] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:36.129 [2024-11-18 00:56:10.501367] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:36.129 [2024-11-18 00:56:10.503548] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:36.129 [2024-11-18 00:56:10.505729] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:36.129 [2024-11-18 00:56:10.507108] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:36.129 [2024-11-18 00:56:10.509224] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:36.129 [2024-11-18 00:56:10.510579] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:36.129 [2024-11-18 00:56:10.512687] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:36.129 [2024-11-18 00:56:10.514094] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:36.130 [2024-11-18 00:56:10.516241] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:36.130 [2024-11-18 00:56:10.517592] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:36.130 [2024-11-18 00:56:10.519728] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:36.130 [2024-11-18 00:56:10.521088] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:36.130 [2024-11-18 00:56:10.523231] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:36.130 [2024-11-18 00:56:10.525505] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:36.130 [2024-11-18 00:56:10.526933] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:36.388 [2024-11-18 00:56:10.563763] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:36.388 [2024-11-18 00:56:10.566998] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:36.388 Running I/O for 5 seconds... 00:13:42.953 00:13:42.953 Latency(us) 00:13:42.953 [2024-11-18T00:56:17.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x0 length 0x100 00:13:42.953 Malloc0 : 5.57 384.02 24.00 0.00 0.00 323732.55 23093.64 906768.58 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x100 length 0x100 00:13:42.953 Malloc0 : 5.59 356.11 22.26 0.00 0.00 348905.95 18599.74 1078535.31 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x0 length 0x80 00:13:42.953 Malloc1p0 : 5.69 209.86 13.12 0.00 0.00 579488.77 47934.90 1078535.31 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x80 length 0x80 00:13:42.953 Malloc1p0 : 5.59 272.84 17.05 0.00 0.00 451038.85 47934.90 946714.33 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x0 length 0x80 00:13:42.953 Malloc1p1 : 5.81 124.67 7.79 0.00 0.00 956312.83 38198.13 1925385.26 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x80 length 0x80 00:13:42.953 Malloc1p1 : 5.81 124.51 7.78 0.00 0.00 959437.73 37948.46 1997287.62 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x0 length 0x20 00:13:42.953 Malloc2p0 : 5.64 72.30 4.52 0.00 0.00 416076.42 6366.35 715028.97 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x20 length 0x20 00:13:42.953 Malloc2p0 : 5.65 72.20 4.51 0.00 0.00 417322.70 6584.81 627148.31 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x0 length 0x20 00:13:42.953 Malloc2p1 : 5.64 72.29 4.52 0.00 0.00 414301.03 6709.64 699050.67 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x20 length 0x20 00:13:42.953 Malloc2p1 : 5.65 72.17 4.51 0.00 0.00 415664.57 6803.26 615164.59 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x0 length 0x20 00:13:42.953 Malloc2p2 : 5.65 72.27 
4.52 0.00 0.00 412851.94 7365.00 687066.94 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x20 length 0x20 00:13:42.953 Malloc2p2 : 5.65 72.15 4.51 0.00 0.00 413902.60 7396.21 603180.86 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x0 length 0x20 00:13:42.953 Malloc2p3 : 5.65 72.26 4.52 0.00 0.00 410936.65 7926.74 671088.64 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x20 length 0x20 00:13:42.953 Malloc2p3 : 5.66 72.14 4.51 0.00 0.00 412381.76 7989.15 587202.56 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x0 length 0x20 00:13:42.953 Malloc2p4 : 5.65 72.25 4.52 0.00 0.00 409166.50 7021.71 655110.34 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x20 length 0x20 00:13:42.953 Malloc2p4 : 5.66 72.13 4.51 0.00 0.00 410527.78 7021.71 571224.26 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x0 length 0x20 00:13:42.953 Malloc2p5 : 5.65 72.23 4.51 0.00 0.00 407358.79 7458.62 639132.04 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x20 length 0x20 00:13:42.953 Malloc2p5 : 5.66 72.11 4.51 0.00 0.00 408772.62 7458.62 559240.53 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x0 length 0x20 00:13:42.953 Malloc2p6 : 5.65 72.22 4.51 0.00 0.00 405736.41 7084.13 627148.31 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x20 length 0x20 00:13:42.953 Malloc2p6 : 5.66 72.10 4.51 0.00 0.00 407037.07 6834.47 543262.23 00:13:42.953 [2024-11-18T00:56:17.352Z] Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.953 Verification LBA range: start 0x0 length 0x20 00:13:42.953 Malloc2p7 : 5.65 72.20 4.51 0.00 0.00 404098.01 6147.90 611170.01 00:13:42.953 [2024-11-18T00:56:17.353Z] Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.954 Verification LBA range: start 0x20 length 0x20 00:13:42.954 Malloc2p7 : 5.66 72.08 4.51 0.00 0.00 405524.15 6085.49 531278.51 00:13:42.954 [2024-11-18T00:56:17.353Z] Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.954 Verification LBA range: start 0x0 length 0x100 00:13:42.954 TestPT : 5.84 129.86 8.12 0.00 0.00 878858.26 47185.92 1917396.11 00:13:42.954 [2024-11-18T00:56:17.353Z] Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.954 Verification LBA range: start 0x100 length 0x100 00:13:42.954 TestPT : 5.85 119.33 7.46 0.00 0.00 960297.78 52428.80 2021255.07 00:13:42.954 [2024-11-18T00:56:17.353Z] Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.954 Verification LBA range: start 0x0 length 0x200 00:13:42.954 raid0 : 5.77 
137.97 8.62 0.00 0.00 821985.16 42692.02 1933374.42 00:13:42.954 [2024-11-18T00:56:17.353Z] Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.954 Verification LBA range: start 0x200 length 0x200 00:13:42.954 raid0 : 5.84 129.70 8.11 0.00 0.00 872346.33 42941.68 1973320.17 00:13:42.954 [2024-11-18T00:56:17.353Z] Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.954 Verification LBA range: start 0x0 length 0x200 00:13:42.954 concat0 : 5.88 145.82 9.11 0.00 0.00 763732.93 33204.91 1949352.72 00:13:42.954 [2024-11-18T00:56:17.353Z] Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.954 Verification LBA range: start 0x200 length 0x200 00:13:42.954 concat0 : 5.85 141.62 8.85 0.00 0.00 795121.12 27213.04 1989298.47 00:13:42.954 [2024-11-18T00:56:17.353Z] Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.954 Verification LBA range: start 0x0 length 0x100 00:13:42.954 raid1 : 5.84 160.26 10.02 0.00 0.00 688214.89 24466.77 1965331.02 00:13:42.954 [2024-11-18T00:56:17.353Z] Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.954 Verification LBA range: start 0x100 length 0x100 00:13:42.954 raid1 : 5.85 155.65 9.73 0.00 0.00 713222.77 24341.94 1997287.62 00:13:42.954 [2024-11-18T00:56:17.353Z] Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:42.954 Verification LBA range: start 0x0 length 0x4e 00:13:42.954 AIO0 : 5.89 174.34 10.90 0.00 0.00 380538.96 1942.67 1142448.52 00:13:42.954 [2024-11-18T00:56:17.353Z] Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:42.954 Verification LBA range: start 0x4e length 0x4e 00:13:42.954 AIO0 : 5.85 158.42 9.90 0.00 0.00 422100.33 2543.42 1166415.97 00:13:42.954 [2024-11-18T00:56:17.353Z] =================================================================================================================== 00:13:42.954 [2024-11-18T00:56:17.353Z] Total : 4080.09 255.01 0.00 0.00 552137.87 1942.67 2021255.07 00:13:42.954 ************************************ 00:13:42.954 END TEST bdev_verify_big_io 00:13:42.954 ************************************ 00:13:42.954 00:13:42.954 real 0m7.655s 00:13:42.954 user 0m13.844s 00:13:42.954 sys 0m0.580s 00:13:42.954 00:56:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:42.954 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:13:42.954 00:56:17 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:42.954 00:56:17 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:42.954 00:56:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:42.954 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:13:43.213 ************************************ 00:13:43.213 START TEST bdev_write_zeroes 00:13:43.213 ************************************ 00:13:43.213 00:56:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:43.213 [2024-11-18 00:56:17.425477] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
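(A sketch, not part of the harness: the big-I/O pass above is the same bdevperf verify invocation with a 64 KiB I/O size, and the queue-depth warnings before it simply record bdevperf clamping -q 128 down to what each bdev can accept in flight at once, 32 for the Malloc2pX bdevs and 78 for AIO0.)

    # same run as the 4 KiB verify pass, only the I/O size changes
    ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3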
00:13:43.213 [2024-11-18 00:56:17.427170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121699 ] 00:13:43.213 [2024-11-18 00:56:17.582041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.473 [2024-11-18 00:56:17.670524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.473 [2024-11-18 00:56:17.855278] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:43.473 [2024-11-18 00:56:17.855641] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:43.473 [2024-11-18 00:56:17.863211] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:43.473 [2024-11-18 00:56:17.863416] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:43.473 [2024-11-18 00:56:17.871310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:43.473 [2024-11-18 00:56:17.871495] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:43.473 [2024-11-18 00:56:17.871617] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:43.731 [2024-11-18 00:56:17.986775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:43.731 [2024-11-18 00:56:17.987162] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.731 [2024-11-18 00:56:17.987259] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:43.731 [2024-11-18 00:56:17.987369] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.731 [2024-11-18 00:56:17.990471] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.731 [2024-11-18 00:56:17.990635] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:43.990 Running I/O for 1 seconds... 
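(Sanity-check sketch for the per-bdev table that follows: at a 4 KiB I/O size the MiB/s column is just IOPS * 4096 / 2^20, so the 5763.15 IOPS reported for Malloc0 works out to about 22.51 MiB/s.)

    # quick arithmetic check of the first row in the table below
    python3 -c 'print(round(5763.15 * 4096 / 1048576, 2))'   # prints 22.51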
00:13:45.366 00:13:45.366 Latency(us) 00:13:45.366 [2024-11-18T00:56:19.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 Malloc0 : 1.04 5763.15 22.51 0.00 0.00 22199.73 686.57 37199.48 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 Malloc1p0 : 1.05 5756.55 22.49 0.00 0.00 22185.90 908.92 36450.50 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 Malloc1p1 : 1.05 5750.34 22.46 0.00 0.00 22167.59 862.11 35701.52 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 Malloc2p0 : 1.05 5744.28 22.44 0.00 0.00 22154.34 905.02 34952.53 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 Malloc2p1 : 1.05 5738.32 22.42 0.00 0.00 22132.00 869.91 34203.55 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 Malloc2p2 : 1.05 5732.40 22.39 0.00 0.00 22110.80 897.22 33454.57 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 Malloc2p3 : 1.05 5726.34 22.37 0.00 0.00 22085.62 866.01 32705.58 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 Malloc2p4 : 1.05 5720.31 22.34 0.00 0.00 22074.19 901.12 31706.94 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 Malloc2p5 : 1.05 5714.39 22.32 0.00 0.00 22052.54 877.71 30957.96 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 Malloc2p6 : 1.05 5708.42 22.30 0.00 0.00 22027.63 889.42 30084.14 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 Malloc2p7 : 1.05 5702.38 22.27 0.00 0.00 22004.98 854.31 29335.16 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 TestPT : 1.06 5696.20 22.25 0.00 0.00 21993.50 908.92 28586.18 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 raid0 : 1.06 5689.44 22.22 0.00 0.00 21966.74 1263.91 27337.87 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 concat0 : 1.06 5682.77 22.20 0.00 0.00 21927.87 1334.13 26214.40 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 raid1 : 1.06 5674.33 22.17 0.00 0.00 21890.27 2106.51 24217.11 00:13:45.366 [2024-11-18T00:56:19.765Z] Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.366 AIO0 : 1.06 5663.59 22.12 0.00 0.00 21839.23 1357.53 23468.13 00:13:45.366 [2024-11-18T00:56:19.765Z] =================================================================================================================== 00:13:45.366 [2024-11-18T00:56:19.765Z] Total : 91463.22 357.28 0.00 
0.00 22050.82 686.57 37199.48 00:13:45.625 00:13:45.625 real 0m2.650s 00:13:45.625 user 0m1.975s 00:13:45.625 sys 0m0.477s 00:13:45.625 ************************************ 00:13:45.625 END TEST bdev_write_zeroes 00:13:45.625 ************************************ 00:13:45.625 00:56:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:45.625 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:13:45.884 00:56:20 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:45.884 00:56:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:45.884 00:56:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.884 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:13:45.884 ************************************ 00:13:45.884 START TEST bdev_json_nonenclosed 00:13:45.884 ************************************ 00:13:45.884 00:56:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:45.884 [2024-11-18 00:56:20.151865] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:45.884 [2024-11-18 00:56:20.152449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121764 ] 00:13:46.143 [2024-11-18 00:56:20.308686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.143 [2024-11-18 00:56:20.395470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.143 [2024-11-18 00:56:20.396045] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:46.143 [2024-11-18 00:56:20.396254] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:46.402 00:13:46.402 real 0m0.530s 00:13:46.402 user 0m0.267s 00:13:46.402 sys 0m0.161s 00:13:46.402 00:56:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:46.402 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:13:46.402 ************************************ 00:13:46.402 END TEST bdev_json_nonenclosed 00:13:46.402 ************************************ 00:13:46.402 00:56:20 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:46.402 00:56:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:46.402 00:56:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:46.402 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:13:46.402 ************************************ 00:13:46.402 START TEST bdev_json_nonarray 00:13:46.402 ************************************ 00:13:46.402 00:56:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:46.402 [2024-11-18 00:56:20.753294] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
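(For contrast with the two deliberately malformed configs exercised here, nonenclosed.json rejected above for not being wrapped in {} and nonarray.json fed to the check that starts next, a minimal well-formed --json config has roughly the following shape. This is a sketch assuming the standard SPDK subsystems layout, not a copy of either test file.)

    cat > /tmp/minimal_bdev.json << 'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF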
00:13:46.402 [2024-11-18 00:56:20.754666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121786 ] 00:13:46.661 [2024-11-18 00:56:20.920090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.661 [2024-11-18 00:56:20.992860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.661 [2024-11-18 00:56:20.993267] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:46.661 [2024-11-18 00:56:20.993405] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:46.920 00:13:46.920 real 0m0.507s 00:13:46.920 user 0m0.227s 00:13:46.920 sys 0m0.178s 00:13:46.920 00:56:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:46.920 00:56:21 -- common/autotest_common.sh@10 -- # set +x 00:13:46.920 ************************************ 00:13:46.920 END TEST bdev_json_nonarray 00:13:46.920 ************************************ 00:13:46.920 00:56:21 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:13:46.920 00:56:21 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:13:46.920 00:56:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:46.920 00:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:46.920 00:56:21 -- common/autotest_common.sh@10 -- # set +x 00:13:46.920 ************************************ 00:13:46.920 START TEST bdev_qos 00:13:46.920 ************************************ 00:13:46.920 00:56:21 -- common/autotest_common.sh@1114 -- # qos_test_suite '' 00:13:46.920 00:56:21 -- bdev/blockdev.sh@444 -- # QOS_PID=121824 00:13:46.920 00:56:21 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 121824' 00:13:46.920 Process qos testing pid: 121824 00:13:46.920 00:56:21 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:46.920 00:56:21 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:46.920 00:56:21 -- bdev/blockdev.sh@447 -- # waitforlisten 121824 00:13:46.920 00:56:21 -- common/autotest_common.sh@829 -- # '[' -z 121824 ']' 00:13:46.920 00:56:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.920 00:56:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.920 00:56:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.921 00:56:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.921 00:56:21 -- common/autotest_common.sh@10 -- # set +x 00:13:46.921 [2024-11-18 00:56:21.318914] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
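(Sketch of the shape of the QoS suite starting here, not a verbatim extract: bdevperf is launched with -z so it idles waiting for RPC configuration, the shell then creates the test bdevs over RPC, and bdevperf.py perform_tests kicks off the 60-second randread pass measured below. Paths are relative to the repo root and the default RPC socket is assumed.)

    ./build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 &
    QOS_PID=$!
    # ... create Malloc_0 and Null_1 over RPC (see the calls that follow) ...
    ./examples/bdev/bdevperf/bdevperf.py perform_tests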
00:13:46.921 [2024-11-18 00:56:21.319307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121824 ] 00:13:47.179 [2024-11-18 00:56:21.468678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.179 [2024-11-18 00:56:21.549709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.116 00:56:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.116 00:56:22 -- common/autotest_common.sh@862 -- # return 0 00:13:48.116 00:56:22 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:48.116 00:56:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.116 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:13:48.116 Malloc_0 00:13:48.116 00:56:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.116 00:56:22 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:13:48.116 00:56:22 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:13:48.116 00:56:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:48.116 00:56:22 -- common/autotest_common.sh@899 -- # local i 00:13:48.116 00:56:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:48.116 00:56:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:48.116 00:56:22 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:48.116 00:56:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.116 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:13:48.116 00:56:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.116 00:56:22 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:48.116 00:56:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.116 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:13:48.116 [ 00:13:48.116 { 00:13:48.116 "name": "Malloc_0", 00:13:48.116 "aliases": [ 00:13:48.116 "fd95eb77-3d6a-424f-af32-f86d46802059" 00:13:48.116 ], 00:13:48.116 "product_name": "Malloc disk", 00:13:48.116 "block_size": 512, 00:13:48.116 "num_blocks": 262144, 00:13:48.116 "uuid": "fd95eb77-3d6a-424f-af32-f86d46802059", 00:13:48.116 "assigned_rate_limits": { 00:13:48.116 "rw_ios_per_sec": 0, 00:13:48.116 "rw_mbytes_per_sec": 0, 00:13:48.116 "r_mbytes_per_sec": 0, 00:13:48.116 "w_mbytes_per_sec": 0 00:13:48.116 }, 00:13:48.116 "claimed": false, 00:13:48.116 "zoned": false, 00:13:48.116 "supported_io_types": { 00:13:48.116 "read": true, 00:13:48.116 "write": true, 00:13:48.116 "unmap": true, 00:13:48.116 "write_zeroes": true, 00:13:48.116 "flush": true, 00:13:48.116 "reset": true, 00:13:48.116 "compare": false, 00:13:48.116 "compare_and_write": false, 00:13:48.116 "abort": true, 00:13:48.116 "nvme_admin": false, 00:13:48.116 "nvme_io": false 00:13:48.116 }, 00:13:48.116 "memory_domains": [ 00:13:48.116 { 00:13:48.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.116 "dma_device_type": 2 00:13:48.116 } 00:13:48.116 ], 00:13:48.116 "driver_specific": {} 00:13:48.116 } 00:13:48.116 ] 00:13:48.116 00:56:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.116 00:56:22 -- common/autotest_common.sh@905 -- # return 0 00:13:48.116 00:56:22 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:48.116 00:56:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.116 00:56:22 -- common/autotest_common.sh@10 -- # 
set +x 00:13:48.116 Null_1 00:13:48.116 00:56:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.116 00:56:22 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:13:48.116 00:56:22 -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:13:48.116 00:56:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:48.116 00:56:22 -- common/autotest_common.sh@899 -- # local i 00:13:48.116 00:56:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:48.116 00:56:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:48.116 00:56:22 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:48.116 00:56:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.116 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:13:48.116 00:56:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.116 00:56:22 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:48.116 00:56:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.116 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:13:48.116 [ 00:13:48.116 { 00:13:48.116 "name": "Null_1", 00:13:48.116 "aliases": [ 00:13:48.116 "d2f4c4bb-6ca0-4286-a064-fb3cd8786dab" 00:13:48.117 ], 00:13:48.117 "product_name": "Null disk", 00:13:48.117 "block_size": 512, 00:13:48.117 "num_blocks": 262144, 00:13:48.117 "uuid": "d2f4c4bb-6ca0-4286-a064-fb3cd8786dab", 00:13:48.117 "assigned_rate_limits": { 00:13:48.117 "rw_ios_per_sec": 0, 00:13:48.117 "rw_mbytes_per_sec": 0, 00:13:48.117 "r_mbytes_per_sec": 0, 00:13:48.117 "w_mbytes_per_sec": 0 00:13:48.117 }, 00:13:48.117 "claimed": false, 00:13:48.117 "zoned": false, 00:13:48.117 "supported_io_types": { 00:13:48.117 "read": true, 00:13:48.117 "write": true, 00:13:48.117 "unmap": false, 00:13:48.117 "write_zeroes": true, 00:13:48.117 "flush": false, 00:13:48.117 "reset": true, 00:13:48.117 "compare": false, 00:13:48.117 "compare_and_write": false, 00:13:48.117 "abort": true, 00:13:48.117 "nvme_admin": false, 00:13:48.117 "nvme_io": false 00:13:48.117 }, 00:13:48.117 "driver_specific": {} 00:13:48.117 } 00:13:48.117 ] 00:13:48.117 00:56:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.117 00:56:22 -- common/autotest_common.sh@905 -- # return 0 00:13:48.117 00:56:22 -- bdev/blockdev.sh@455 -- # qos_function_test 00:13:48.117 00:56:22 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:13:48.117 00:56:22 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:48.117 00:56:22 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:13:48.117 00:56:22 -- bdev/blockdev.sh@410 -- # local io_result=0 00:13:48.117 00:56:22 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:13:48.117 00:56:22 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:13:48.117 00:56:22 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:13:48.117 00:56:22 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:48.117 00:56:22 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:48.117 00:56:22 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:48.117 00:56:22 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:48.117 00:56:22 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:48.117 00:56:22 -- bdev/blockdev.sh@376 -- # tail -1 00:13:48.117 Running I/O for 60 seconds... 
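(The two rpc_cmd calls above correspond to plain rpc.py invocations; a sketch assuming the default RPC socket of the bdevperf app started for this suite. Both bdevs are 128 MiB with 512-byte blocks, matching num_blocks=262144 in the JSON dumps.)

    scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512
    scripts/rpc.py bdev_null_create Null_1 128 512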
00:13:53.438 00:56:27 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 89368.73 357474.93 0.00 0.00 362496.00 0.00 0.00 ' 00:13:53.438 00:56:27 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:53.438 00:56:27 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:53.438 00:56:27 -- bdev/blockdev.sh@378 -- # iostat_result=89368.73 00:13:53.438 00:56:27 -- bdev/blockdev.sh@383 -- # echo 89368 00:13:53.438 00:56:27 -- bdev/blockdev.sh@414 -- # io_result=89368 00:13:53.438 00:56:27 -- bdev/blockdev.sh@416 -- # iops_limit=22000 00:13:53.438 00:56:27 -- bdev/blockdev.sh@417 -- # '[' 22000 -gt 1000 ']' 00:13:53.438 00:56:27 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 22000 Malloc_0 00:13:53.438 00:56:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.438 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:13:53.438 00:56:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.438 00:56:27 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 22000 IOPS Malloc_0 00:13:53.438 00:56:27 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:53.438 00:56:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.438 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:13:53.438 ************************************ 00:13:53.438 START TEST bdev_qos_iops 00:13:53.438 ************************************ 00:13:53.438 00:56:27 -- common/autotest_common.sh@1114 -- # run_qos_test 22000 IOPS Malloc_0 00:13:53.438 00:56:27 -- bdev/blockdev.sh@387 -- # local qos_limit=22000 00:13:53.438 00:56:27 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:53.438 00:56:27 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:13:53.438 00:56:27 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:53.438 00:56:27 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:53.438 00:56:27 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:53.438 00:56:27 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:53.438 00:56:27 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:53.438 00:56:27 -- bdev/blockdev.sh@376 -- # tail -1 00:13:58.709 00:56:32 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 21994.34 87977.34 0.00 0.00 89408.00 0.00 0.00 ' 00:13:58.709 00:56:32 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:58.709 00:56:32 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:58.709 00:56:32 -- bdev/blockdev.sh@378 -- # iostat_result=21994.34 00:13:58.709 00:56:32 -- bdev/blockdev.sh@383 -- # echo 21994 00:13:58.709 00:56:32 -- bdev/blockdev.sh@390 -- # qos_result=21994 00:13:58.709 00:56:32 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:13:58.709 00:56:32 -- bdev/blockdev.sh@394 -- # lower_limit=19800 00:13:58.709 00:56:32 -- bdev/blockdev.sh@395 -- # upper_limit=24200 00:13:58.709 00:56:32 -- bdev/blockdev.sh@398 -- # '[' 21994 -lt 19800 ']' 00:13:58.709 00:56:32 -- bdev/blockdev.sh@398 -- # '[' 21994 -gt 24200 ']' 00:13:58.709 00:13:58.709 real 0m5.222s 00:13:58.709 user 0m0.113s 00:13:58.709 sys 0m0.044s 00:13:58.709 00:56:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:58.709 00:56:32 -- common/autotest_common.sh@10 -- # set +x 00:13:58.709 ************************************ 00:13:58.709 END TEST bdev_qos_iops 00:13:58.709 ************************************ 00:13:58.709 00:56:32 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:13:58.709 00:56:32 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:58.709 00:56:32 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:58.709 00:56:32 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:58.709 00:56:32 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:58.709 00:56:32 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:58.709 00:56:32 -- bdev/blockdev.sh@376 -- # tail -1 00:14:03.985 00:56:38 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 31305.46 125221.84 0.00 0.00 126976.00 0.00 0.00 ' 00:14:03.985 00:56:38 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:03.985 00:56:38 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:03.985 00:56:38 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:03.985 00:56:38 -- bdev/blockdev.sh@380 -- # iostat_result=126976.00 00:14:03.985 00:56:38 -- bdev/blockdev.sh@383 -- # echo 126976 00:14:03.985 00:56:38 -- bdev/blockdev.sh@425 -- # bw_limit=126976 00:14:03.985 00:56:38 -- bdev/blockdev.sh@426 -- # bw_limit=12 00:14:03.985 00:56:38 -- bdev/blockdev.sh@427 -- # '[' 12 -lt 2 ']' 00:14:03.985 00:56:38 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1 00:14:03.985 00:56:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.985 00:56:38 -- common/autotest_common.sh@10 -- # set +x 00:14:03.985 00:56:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.985 00:56:38 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1 00:14:03.985 00:56:38 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:03.985 00:56:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:03.985 00:56:38 -- common/autotest_common.sh@10 -- # set +x 00:14:03.985 ************************************ 00:14:03.985 START TEST bdev_qos_bw 00:14:03.985 ************************************ 00:14:03.985 00:56:38 -- common/autotest_common.sh@1114 -- # run_qos_test 12 BANDWIDTH Null_1 00:14:03.985 00:56:38 -- bdev/blockdev.sh@387 -- # local qos_limit=12 00:14:03.985 00:56:38 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:03.985 00:56:38 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:14:03.985 00:56:38 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:03.985 00:56:38 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:14:03.985 00:56:38 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:03.985 00:56:38 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:03.985 00:56:38 -- bdev/blockdev.sh@376 -- # grep Null_1 00:14:03.985 00:56:38 -- bdev/blockdev.sh@376 -- # tail -1 00:14:09.334 00:56:43 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 3071.64 12286.54 0.00 0.00 12472.00 0.00 0.00 ' 00:14:09.334 00:56:43 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:09.334 00:56:43 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:09.334 00:56:43 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:09.334 00:56:43 -- bdev/blockdev.sh@380 -- # iostat_result=12472.00 00:14:09.334 00:56:43 -- bdev/blockdev.sh@383 -- # echo 12472 00:14:09.334 00:56:43 -- bdev/blockdev.sh@390 -- # qos_result=12472 00:14:09.334 00:56:43 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:09.334 00:56:43 -- bdev/blockdev.sh@392 -- # qos_limit=12288 00:14:09.334 00:56:43 -- bdev/blockdev.sh@394 -- # lower_limit=11059 00:14:09.334 00:56:43 -- bdev/blockdev.sh@395 -- # upper_limit=13516 00:14:09.334 00:56:43 -- bdev/blockdev.sh@398 -- # '[' 12472 -lt 11059 ']' 00:14:09.334 00:56:43 -- bdev/blockdev.sh@398 -- # '[' 
12472 -gt 13516 ']' 00:14:09.334 00:14:09.334 real 0m5.241s 00:14:09.334 user 0m0.129s 00:14:09.334 sys 0m0.030s 00:14:09.334 00:56:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:09.334 ************************************ 00:14:09.334 END TEST bdev_qos_bw 00:14:09.334 ************************************ 00:14:09.334 00:56:43 -- common/autotest_common.sh@10 -- # set +x 00:14:09.335 00:56:43 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:14:09.335 00:56:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.335 00:56:43 -- common/autotest_common.sh@10 -- # set +x 00:14:09.335 00:56:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.335 00:56:43 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:14:09.335 00:56:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:09.335 00:56:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:09.335 00:56:43 -- common/autotest_common.sh@10 -- # set +x 00:14:09.335 ************************************ 00:14:09.335 START TEST bdev_qos_ro_bw 00:14:09.335 ************************************ 00:14:09.335 00:56:43 -- common/autotest_common.sh@1114 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:14:09.335 00:56:43 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:14:09.335 00:56:43 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:09.335 00:56:43 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:14:09.335 00:56:43 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:09.335 00:56:43 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:09.335 00:56:43 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:09.335 00:56:43 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:09.335 00:56:43 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:09.335 00:56:43 -- bdev/blockdev.sh@376 -- # tail -1 00:14:14.612 00:56:48 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.88 2047.51 0.00 0.00 2068.00 0.00 0.00 ' 00:14:14.612 00:56:48 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:14.612 00:56:48 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:14.612 00:56:48 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:14.612 00:56:48 -- bdev/blockdev.sh@380 -- # iostat_result=2068.00 00:14:14.612 00:56:48 -- bdev/blockdev.sh@383 -- # echo 2068 00:14:14.612 00:56:48 -- bdev/blockdev.sh@390 -- # qos_result=2068 00:14:14.612 00:56:48 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:14.612 00:56:48 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:14:14.612 00:56:48 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:14:14.612 00:56:48 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:14:14.612 00:56:48 -- bdev/blockdev.sh@398 -- # '[' 2068 -lt 1843 ']' 00:14:14.612 00:56:48 -- bdev/blockdev.sh@398 -- # '[' 2068 -gt 2252 ']' 00:14:14.612 00:14:14.612 real 0m5.186s 00:14:14.612 user 0m0.113s 00:14:14.612 sys 0m0.048s 00:14:14.612 00:56:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:14.612 00:56:48 -- common/autotest_common.sh@10 -- # set +x 00:14:14.612 ************************************ 00:14:14.612 END TEST bdev_qos_ro_bw 00:14:14.612 ************************************ 00:14:14.612 00:56:48 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:14:14.612 00:56:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.612 00:56:48 -- common/autotest_common.sh@10 -- # set +x 00:14:15.179 00:56:49 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.179 00:56:49 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:14:15.179 00:56:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.179 00:56:49 -- common/autotest_common.sh@10 -- # set +x 00:14:15.179 00:14:15.179 Latency(us) 00:14:15.179 [2024-11-18T00:56:49.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.179 [2024-11-18T00:56:49.578Z] Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:15.179 Malloc_0 : 26.77 30273.27 118.25 0.00 0.00 8375.96 1997.29 503316.48 00:14:15.179 [2024-11-18T00:56:49.578Z] Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:15.179 Null_1 : 26.89 30623.87 119.62 0.00 0.00 8343.21 565.64 119837.26 00:14:15.179 [2024-11-18T00:56:49.578Z] =================================================================================================================== 00:14:15.179 [2024-11-18T00:56:49.578Z] Total : 60897.14 237.88 0.00 0.00 8359.46 565.64 503316.48 00:14:15.179 0 00:14:15.179 00:56:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.179 00:56:49 -- bdev/blockdev.sh@459 -- # killprocess 121824 00:14:15.179 00:56:49 -- common/autotest_common.sh@936 -- # '[' -z 121824 ']' 00:14:15.179 00:56:49 -- common/autotest_common.sh@940 -- # kill -0 121824 00:14:15.179 00:56:49 -- common/autotest_common.sh@941 -- # uname 00:14:15.179 00:56:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:15.179 00:56:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121824 00:14:15.179 00:56:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:15.179 killing process with pid 121824 00:14:15.179 00:56:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:15.179 00:56:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121824' 00:14:15.179 Received shutdown signal, test time was about 26.934760 seconds 00:14:15.179 00:14:15.179 Latency(us) 00:14:15.179 [2024-11-18T00:56:49.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.179 [2024-11-18T00:56:49.578Z] =================================================================================================================== 00:14:15.179 [2024-11-18T00:56:49.578Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:15.179 00:56:49 -- common/autotest_common.sh@955 -- # kill 121824 00:14:15.179 00:56:49 -- common/autotest_common.sh@960 -- # wait 121824 00:14:15.748 00:56:49 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:14:15.748 00:14:15.748 real 0m28.623s 00:14:15.748 user 0m29.385s 00:14:15.748 sys 0m0.784s 00:14:15.748 ************************************ 00:14:15.748 END TEST bdev_qos 00:14:15.748 ************************************ 00:14:15.748 00:56:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:15.748 00:56:49 -- common/autotest_common.sh@10 -- # set +x 00:14:15.748 00:56:49 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:14:15.748 00:56:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:15.748 00:56:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:15.748 00:56:49 -- common/autotest_common.sh@10 -- # set +x 00:14:15.748 ************************************ 00:14:15.748 START TEST bdev_qd_sampling 00:14:15.748 ************************************ 00:14:15.748 00:56:49 -- common/autotest_common.sh@1114 -- # qd_sampling_test_suite '' 00:14:15.748 
00:56:49 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:14:15.748 00:56:49 -- bdev/blockdev.sh@539 -- # QD_PID=122289 00:14:15.748 Process bdev QD sampling period testing pid: 122289 00:14:15.748 00:56:49 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 122289' 00:14:15.748 00:56:49 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:14:15.748 00:56:49 -- bdev/blockdev.sh@542 -- # waitforlisten 122289 00:14:15.748 00:56:49 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:14:15.748 00:56:49 -- common/autotest_common.sh@829 -- # '[' -z 122289 ']' 00:14:15.748 00:56:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.748 00:56:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.748 00:56:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.748 00:56:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.748 00:56:49 -- common/autotest_common.sh@10 -- # set +x 00:14:15.748 [2024-11-18 00:56:50.035487] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:15.748 [2024-11-18 00:56:50.035762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122289 ] 00:14:16.007 [2024-11-18 00:56:50.201180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:16.007 [2024-11-18 00:56:50.282660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.007 [2024-11-18 00:56:50.282668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.946 00:56:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.946 00:56:50 -- common/autotest_common.sh@862 -- # return 0 00:14:16.946 00:56:50 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:14:16.946 00:56:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.946 00:56:50 -- common/autotest_common.sh@10 -- # set +x 00:14:16.946 Malloc_QD 00:14:16.946 00:56:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.946 00:56:51 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:14:16.946 00:56:51 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:14:16.946 00:56:51 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:16.946 00:56:51 -- common/autotest_common.sh@899 -- # local i 00:14:16.946 00:56:51 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:16.946 00:56:51 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:16.946 00:56:51 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:16.946 00:56:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.946 00:56:51 -- common/autotest_common.sh@10 -- # set +x 00:14:16.946 00:56:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.946 00:56:51 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:14:16.946 00:56:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.946 00:56:51 -- common/autotest_common.sh@10 -- # set +x 00:14:16.946 [ 00:14:16.946 { 00:14:16.946 "name": 
"Malloc_QD", 00:14:16.946 "aliases": [ 00:14:16.946 "7638e647-3557-4f3a-91a8-6b453319bbad" 00:14:16.946 ], 00:14:16.946 "product_name": "Malloc disk", 00:14:16.946 "block_size": 512, 00:14:16.946 "num_blocks": 262144, 00:14:16.946 "uuid": "7638e647-3557-4f3a-91a8-6b453319bbad", 00:14:16.946 "assigned_rate_limits": { 00:14:16.946 "rw_ios_per_sec": 0, 00:14:16.946 "rw_mbytes_per_sec": 0, 00:14:16.946 "r_mbytes_per_sec": 0, 00:14:16.946 "w_mbytes_per_sec": 0 00:14:16.946 }, 00:14:16.946 "claimed": false, 00:14:16.946 "zoned": false, 00:14:16.946 "supported_io_types": { 00:14:16.946 "read": true, 00:14:16.946 "write": true, 00:14:16.946 "unmap": true, 00:14:16.946 "write_zeroes": true, 00:14:16.946 "flush": true, 00:14:16.946 "reset": true, 00:14:16.946 "compare": false, 00:14:16.946 "compare_and_write": false, 00:14:16.946 "abort": true, 00:14:16.946 "nvme_admin": false, 00:14:16.946 "nvme_io": false 00:14:16.946 }, 00:14:16.946 "memory_domains": [ 00:14:16.946 { 00:14:16.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.946 "dma_device_type": 2 00:14:16.946 } 00:14:16.946 ], 00:14:16.946 "driver_specific": {} 00:14:16.946 } 00:14:16.946 ] 00:14:16.946 00:56:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.946 00:56:51 -- common/autotest_common.sh@905 -- # return 0 00:14:16.946 00:56:51 -- bdev/blockdev.sh@548 -- # sleep 2 00:14:16.946 00:56:51 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:16.946 Running I/O for 5 seconds... 00:14:18.851 00:56:53 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:14:18.851 00:56:53 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:14:18.851 00:56:53 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:14:18.851 00:56:53 -- bdev/blockdev.sh@519 -- # local iostats 00:14:18.851 00:56:53 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:14:18.851 00:56:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.851 00:56:53 -- common/autotest_common.sh@10 -- # set +x 00:14:18.851 00:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.851 00:56:53 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:14:18.851 00:56:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.851 00:56:53 -- common/autotest_common.sh@10 -- # set +x 00:14:18.851 00:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.851 00:56:53 -- bdev/blockdev.sh@523 -- # iostats='{ 00:14:18.851 "tick_rate": 2100000000, 00:14:18.851 "ticks": 1583818994956, 00:14:18.851 "bdevs": [ 00:14:18.851 { 00:14:18.851 "name": "Malloc_QD", 00:14:18.851 "bytes_read": 957387264, 00:14:18.851 "num_read_ops": 233731, 00:14:18.851 "bytes_written": 0, 00:14:18.851 "num_write_ops": 0, 00:14:18.851 "bytes_unmapped": 0, 00:14:18.851 "num_unmap_ops": 0, 00:14:18.851 "bytes_copied": 0, 00:14:18.851 "num_copy_ops": 0, 00:14:18.851 "read_latency_ticks": 2053305751760, 00:14:18.851 "max_read_latency_ticks": 12833710, 00:14:18.851 "min_read_latency_ticks": 404958, 00:14:18.851 "write_latency_ticks": 0, 00:14:18.851 "max_write_latency_ticks": 0, 00:14:18.851 "min_write_latency_ticks": 0, 00:14:18.851 "unmap_latency_ticks": 0, 00:14:18.851 "max_unmap_latency_ticks": 0, 00:14:18.851 "min_unmap_latency_ticks": 0, 00:14:18.851 "copy_latency_ticks": 0, 00:14:18.851 "max_copy_latency_ticks": 0, 00:14:18.851 "min_copy_latency_ticks": 0, 00:14:18.851 "io_error": {}, 00:14:18.851 "queue_depth_polling_period": 10, 00:14:18.851 
"queue_depth": 512, 00:14:18.851 "io_time": 30, 00:14:18.851 "weighted_io_time": 15360 00:14:18.851 } 00:14:18.851 ] 00:14:18.851 }' 00:14:18.851 00:56:53 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:14:18.851 00:56:53 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:14:18.851 00:56:53 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:14:18.851 00:56:53 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:14:18.851 00:56:53 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:14:18.851 00:56:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.851 00:56:53 -- common/autotest_common.sh@10 -- # set +x 00:14:18.851 00:14:18.851 Latency(us) 00:14:18.851 [2024-11-18T00:56:53.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.851 [2024-11-18T00:56:53.250Z] Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:18.851 Malloc_QD : 1.98 60757.66 237.33 0.00 0.00 4204.09 1037.65 6116.69 00:14:18.851 [2024-11-18T00:56:53.250Z] Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:18.851 Malloc_QD : 1.99 61480.89 240.16 0.00 0.00 4154.79 729.48 4618.73 00:14:18.851 [2024-11-18T00:56:53.250Z] =================================================================================================================== 00:14:18.851 [2024-11-18T00:56:53.250Z] Total : 122238.55 477.49 0.00 0.00 4179.28 729.48 6116.69 00:14:18.851 0 00:14:18.851 00:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.851 00:56:53 -- bdev/blockdev.sh@552 -- # killprocess 122289 00:14:18.851 00:56:53 -- common/autotest_common.sh@936 -- # '[' -z 122289 ']' 00:14:18.851 00:56:53 -- common/autotest_common.sh@940 -- # kill -0 122289 00:14:18.851 00:56:53 -- common/autotest_common.sh@941 -- # uname 00:14:18.851 00:56:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:18.851 00:56:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122289 00:14:19.110 00:56:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:19.110 killing process with pid 122289 00:14:19.110 00:56:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:19.110 00:56:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122289' 00:14:19.110 Received shutdown signal, test time was about 2.057800 seconds 00:14:19.110 00:14:19.110 Latency(us) 00:14:19.110 [2024-11-18T00:56:53.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.110 [2024-11-18T00:56:53.509Z] =================================================================================================================== 00:14:19.110 [2024-11-18T00:56:53.509Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:19.110 00:56:53 -- common/autotest_common.sh@955 -- # kill 122289 00:14:19.110 00:56:53 -- common/autotest_common.sh@960 -- # wait 122289 00:14:19.369 00:56:53 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:14:19.369 00:14:19.369 real 0m3.731s 00:14:19.369 user 0m7.094s 00:14:19.369 sys 0m0.478s 00:14:19.369 00:56:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:19.369 ************************************ 00:14:19.369 END TEST bdev_qd_sampling 00:14:19.369 ************************************ 00:14:19.369 00:56:53 -- common/autotest_common.sh@10 -- # set +x 00:14:19.369 00:56:53 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:14:19.369 00:56:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 
']' 00:14:19.369 00:56:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:19.369 00:56:53 -- common/autotest_common.sh@10 -- # set +x 00:14:19.369 ************************************ 00:14:19.369 START TEST bdev_error 00:14:19.369 ************************************ 00:14:19.369 00:56:53 -- common/autotest_common.sh@1114 -- # error_test_suite '' 00:14:19.369 00:56:53 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:14:19.369 00:56:53 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:14:19.369 00:56:53 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:14:19.369 00:56:53 -- bdev/blockdev.sh@470 -- # ERR_PID=122376 00:14:19.369 Process error testing pid: 122376 00:14:19.369 00:56:53 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 122376' 00:14:19.369 00:56:53 -- bdev/blockdev.sh@472 -- # waitforlisten 122376 00:14:19.369 00:56:53 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:14:19.369 00:56:53 -- common/autotest_common.sh@829 -- # '[' -z 122376 ']' 00:14:19.369 00:56:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.369 00:56:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.369 00:56:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.369 00:56:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.369 00:56:53 -- common/autotest_common.sh@10 -- # set +x 00:14:19.628 [2024-11-18 00:56:53.816300] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:19.628 [2024-11-18 00:56:53.816510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122376 ] 00:14:19.628 [2024-11-18 00:56:53.960732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.888 [2024-11-18 00:56:54.035304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.455 00:56:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.455 00:56:54 -- common/autotest_common.sh@862 -- # return 0 00:14:20.455 00:56:54 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:20.455 00:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.455 00:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:20.455 Dev_1 00:14:20.456 00:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.456 00:56:54 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:14:20.456 00:56:54 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:14:20.456 00:56:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:20.456 00:56:54 -- common/autotest_common.sh@899 -- # local i 00:14:20.456 00:56:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:20.456 00:56:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:20.456 00:56:54 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:20.456 00:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.456 00:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:20.456 00:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.456 00:56:54 -- common/autotest_common.sh@904 -- # rpc_cmd 
bdev_get_bdevs -b Dev_1 -t 2000 00:14:20.456 00:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.456 00:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:20.456 [ 00:14:20.456 { 00:14:20.456 "name": "Dev_1", 00:14:20.456 "aliases": [ 00:14:20.456 "dabac2a2-aa42-45bf-bfd0-3a712a945254" 00:14:20.456 ], 00:14:20.456 "product_name": "Malloc disk", 00:14:20.456 "block_size": 512, 00:14:20.456 "num_blocks": 262144, 00:14:20.456 "uuid": "dabac2a2-aa42-45bf-bfd0-3a712a945254", 00:14:20.456 "assigned_rate_limits": { 00:14:20.456 "rw_ios_per_sec": 0, 00:14:20.456 "rw_mbytes_per_sec": 0, 00:14:20.456 "r_mbytes_per_sec": 0, 00:14:20.456 "w_mbytes_per_sec": 0 00:14:20.456 }, 00:14:20.456 "claimed": false, 00:14:20.456 "zoned": false, 00:14:20.456 "supported_io_types": { 00:14:20.456 "read": true, 00:14:20.456 "write": true, 00:14:20.456 "unmap": true, 00:14:20.456 "write_zeroes": true, 00:14:20.456 "flush": true, 00:14:20.456 "reset": true, 00:14:20.456 "compare": false, 00:14:20.456 "compare_and_write": false, 00:14:20.456 "abort": true, 00:14:20.456 "nvme_admin": false, 00:14:20.456 "nvme_io": false 00:14:20.456 }, 00:14:20.456 "memory_domains": [ 00:14:20.456 { 00:14:20.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.456 "dma_device_type": 2 00:14:20.456 } 00:14:20.456 ], 00:14:20.456 "driver_specific": {} 00:14:20.456 } 00:14:20.456 ] 00:14:20.456 00:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.456 00:56:54 -- common/autotest_common.sh@905 -- # return 0 00:14:20.456 00:56:54 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:14:20.456 00:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.456 00:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:20.456 true 00:14:20.456 00:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.456 00:56:54 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:20.456 00:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.456 00:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:20.715 Dev_2 00:14:20.715 00:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.715 00:56:54 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:14:20.715 00:56:54 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:14:20.715 00:56:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:20.715 00:56:54 -- common/autotest_common.sh@899 -- # local i 00:14:20.715 00:56:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:20.715 00:56:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:20.715 00:56:54 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:20.715 00:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.715 00:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:20.715 00:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.715 00:56:54 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:20.715 00:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.715 00:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:20.715 [ 00:14:20.715 { 00:14:20.715 "name": "Dev_2", 00:14:20.715 "aliases": [ 00:14:20.715 "7f82d65d-7e56-4e78-b6d5-6ae4efabda9a" 00:14:20.715 ], 00:14:20.715 "product_name": "Malloc disk", 00:14:20.715 "block_size": 512, 00:14:20.715 "num_blocks": 262144, 00:14:20.715 "uuid": "7f82d65d-7e56-4e78-b6d5-6ae4efabda9a", 00:14:20.715 "assigned_rate_limits": { 00:14:20.715 "rw_ios_per_sec": 
0, 00:14:20.715 "rw_mbytes_per_sec": 0, 00:14:20.715 "r_mbytes_per_sec": 0, 00:14:20.715 "w_mbytes_per_sec": 0 00:14:20.715 }, 00:14:20.715 "claimed": false, 00:14:20.715 "zoned": false, 00:14:20.715 "supported_io_types": { 00:14:20.715 "read": true, 00:14:20.715 "write": true, 00:14:20.715 "unmap": true, 00:14:20.715 "write_zeroes": true, 00:14:20.715 "flush": true, 00:14:20.715 "reset": true, 00:14:20.715 "compare": false, 00:14:20.715 "compare_and_write": false, 00:14:20.715 "abort": true, 00:14:20.715 "nvme_admin": false, 00:14:20.715 "nvme_io": false 00:14:20.715 }, 00:14:20.715 "memory_domains": [ 00:14:20.715 { 00:14:20.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.715 "dma_device_type": 2 00:14:20.715 } 00:14:20.715 ], 00:14:20.715 "driver_specific": {} 00:14:20.715 } 00:14:20.715 ] 00:14:20.715 00:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.715 00:56:54 -- common/autotest_common.sh@905 -- # return 0 00:14:20.715 00:56:54 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:20.715 00:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.715 00:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:20.715 00:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.715 00:56:54 -- bdev/blockdev.sh@482 -- # sleep 1 00:14:20.715 00:56:54 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:20.715 Running I/O for 5 seconds... 00:14:21.653 00:56:55 -- bdev/blockdev.sh@485 -- # kill -0 122376 00:14:21.653 Process is existed as continue on error is set. Pid: 122376 00:14:21.653 00:56:55 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 122376' 00:14:21.653 00:56:55 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:14:21.653 00:56:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.653 00:56:55 -- common/autotest_common.sh@10 -- # set +x 00:14:21.653 00:56:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.653 00:56:55 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:14:21.653 00:56:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.653 00:56:55 -- common/autotest_common.sh@10 -- # set +x 00:14:21.653 00:56:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.653 00:56:55 -- bdev/blockdev.sh@495 -- # sleep 5 00:14:21.653 Timeout while waiting for response: 00:14:21.653 00:14:21.653 00:14:25.856 00:14:25.856 Latency(us) 00:14:25.856 [2024-11-18T00:57:00.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.856 [2024-11-18T00:57:00.255Z] Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:25.856 EE_Dev_1 : 0.90 51357.43 200.61 5.54 0.00 309.27 136.53 663.16 00:14:25.856 [2024-11-18T00:57:00.255Z] Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:25.856 Dev_2 : 5.00 111556.85 435.77 0.00 0.00 141.23 86.31 35451.86 00:14:25.856 [2024-11-18T00:57:00.255Z] =================================================================================================================== 00:14:25.856 [2024-11-18T00:57:00.255Z] Total : 162914.28 636.38 5.54 0.00 154.11 86.31 35451.86 00:14:26.794 00:57:00 -- bdev/blockdev.sh@497 -- # killprocess 122376 00:14:26.794 00:57:00 -- common/autotest_common.sh@936 -- # '[' -z 122376 ']' 00:14:26.794 00:57:00 -- common/autotest_common.sh@940 -- # kill -0 122376 00:14:26.794 00:57:00 -- 
common/autotest_common.sh@941 -- # uname 00:14:26.794 00:57:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:26.794 00:57:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122376 00:14:26.794 00:57:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:26.794 killing process with pid 122376 00:14:26.794 00:57:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:26.794 00:57:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122376' 00:14:26.794 00:57:01 -- common/autotest_common.sh@955 -- # kill 122376 00:14:26.794 Received shutdown signal, test time was about 5.000000 seconds 00:14:26.794 00:14:26.794 Latency(us) 00:14:26.794 [2024-11-18T00:57:01.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.794 [2024-11-18T00:57:01.193Z] =================================================================================================================== 00:14:26.794 [2024-11-18T00:57:01.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.794 00:57:01 -- common/autotest_common.sh@960 -- # wait 122376 00:14:27.362 00:57:01 -- bdev/blockdev.sh@501 -- # ERR_PID=122479 00:14:27.362 00:57:01 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:14:27.363 Process error testing pid: 122479 00:14:27.363 00:57:01 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 122479' 00:14:27.363 00:57:01 -- bdev/blockdev.sh@503 -- # waitforlisten 122479 00:14:27.363 00:57:01 -- common/autotest_common.sh@829 -- # '[' -z 122479 ']' 00:14:27.363 00:57:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.363 00:57:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.363 00:57:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.363 00:57:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.363 00:57:01 -- common/autotest_common.sh@10 -- # set +x 00:14:27.363 [2024-11-18 00:57:01.555867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:27.363 [2024-11-18 00:57:01.556114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122479 ] 00:14:27.363 [2024-11-18 00:57:01.711808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.621 [2024-11-18 00:57:01.779839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.189 00:57:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.189 00:57:02 -- common/autotest_common.sh@862 -- # return 0 00:14:28.189 00:57:02 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:28.189 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.189 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.189 Dev_1 00:14:28.189 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.189 00:57:02 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:14:28.189 00:57:02 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:14:28.189 00:57:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:28.189 00:57:02 -- common/autotest_common.sh@899 -- # local i 00:14:28.189 00:57:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:28.189 00:57:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:28.189 00:57:02 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:28.189 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.189 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.189 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.189 00:57:02 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:28.189 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.189 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.189 [ 00:14:28.189 { 00:14:28.189 "name": "Dev_1", 00:14:28.189 "aliases": [ 00:14:28.189 "9992c91e-b188-43f0-a222-503ab9000f58" 00:14:28.189 ], 00:14:28.189 "product_name": "Malloc disk", 00:14:28.189 "block_size": 512, 00:14:28.189 "num_blocks": 262144, 00:14:28.189 "uuid": "9992c91e-b188-43f0-a222-503ab9000f58", 00:14:28.189 "assigned_rate_limits": { 00:14:28.189 "rw_ios_per_sec": 0, 00:14:28.189 "rw_mbytes_per_sec": 0, 00:14:28.189 "r_mbytes_per_sec": 0, 00:14:28.189 "w_mbytes_per_sec": 0 00:14:28.189 }, 00:14:28.189 "claimed": false, 00:14:28.189 "zoned": false, 00:14:28.189 "supported_io_types": { 00:14:28.189 "read": true, 00:14:28.189 "write": true, 00:14:28.189 "unmap": true, 00:14:28.189 "write_zeroes": true, 00:14:28.189 "flush": true, 00:14:28.189 "reset": true, 00:14:28.189 "compare": false, 00:14:28.189 "compare_and_write": false, 00:14:28.189 "abort": true, 00:14:28.189 "nvme_admin": false, 00:14:28.189 "nvme_io": false 00:14:28.189 }, 00:14:28.189 "memory_domains": [ 00:14:28.189 { 00:14:28.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.189 "dma_device_type": 2 00:14:28.189 } 00:14:28.189 ], 00:14:28.189 "driver_specific": {} 00:14:28.189 } 00:14:28.189 ] 00:14:28.189 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.189 00:57:02 -- common/autotest_common.sh@905 -- # return 0 00:14:28.189 00:57:02 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:14:28.189 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.189 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.189 true 
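At this point Dev_1 has been created and wrapped by an error bdev (exposed as EE_Dev_1); the trace below adds Dev_2 and then arms the injector. The same topology can be reproduced by hand with rpc.py against a running SPDK application; a minimal sketch using the names and sizes visible in this trace (rpc.py path and the default RPC socket are assumed):

# backing malloc bdev, then the error bdev that wraps it (registered as EE_Dev_1)
./scripts/rpc.py bdev_malloc_create -b Dev_1 128 512
./scripts/rpc.py bdev_error_create Dev_1
# second plain malloc bdev, used as the no-error reference target
./scripts/rpc.py bdev_malloc_create -b Dev_2 128 512
# fail the next 5 I/Os of any type submitted to EE_Dev_1
./scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5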
00:14:28.189 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.189 00:57:02 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:28.189 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.189 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.448 Dev_2 00:14:28.448 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.448 00:57:02 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:14:28.449 00:57:02 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:14:28.449 00:57:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:28.449 00:57:02 -- common/autotest_common.sh@899 -- # local i 00:14:28.449 00:57:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:28.449 00:57:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:28.449 00:57:02 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:28.449 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.449 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.449 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.449 00:57:02 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:28.449 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.449 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.449 [ 00:14:28.449 { 00:14:28.449 "name": "Dev_2", 00:14:28.449 "aliases": [ 00:14:28.449 "8130e927-15e8-453b-9c44-26dfc086e7fb" 00:14:28.449 ], 00:14:28.449 "product_name": "Malloc disk", 00:14:28.449 "block_size": 512, 00:14:28.449 "num_blocks": 262144, 00:14:28.449 "uuid": "8130e927-15e8-453b-9c44-26dfc086e7fb", 00:14:28.449 "assigned_rate_limits": { 00:14:28.449 "rw_ios_per_sec": 0, 00:14:28.449 "rw_mbytes_per_sec": 0, 00:14:28.449 "r_mbytes_per_sec": 0, 00:14:28.449 "w_mbytes_per_sec": 0 00:14:28.449 }, 00:14:28.449 "claimed": false, 00:14:28.449 "zoned": false, 00:14:28.449 "supported_io_types": { 00:14:28.449 "read": true, 00:14:28.449 "write": true, 00:14:28.449 "unmap": true, 00:14:28.449 "write_zeroes": true, 00:14:28.449 "flush": true, 00:14:28.449 "reset": true, 00:14:28.449 "compare": false, 00:14:28.449 "compare_and_write": false, 00:14:28.449 "abort": true, 00:14:28.449 "nvme_admin": false, 00:14:28.449 "nvme_io": false 00:14:28.449 }, 00:14:28.449 "memory_domains": [ 00:14:28.449 { 00:14:28.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.449 "dma_device_type": 2 00:14:28.449 } 00:14:28.449 ], 00:14:28.449 "driver_specific": {} 00:14:28.449 } 00:14:28.449 ] 00:14:28.449 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.449 00:57:02 -- common/autotest_common.sh@905 -- # return 0 00:14:28.449 00:57:02 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:28.449 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.449 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.449 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.449 00:57:02 -- bdev/blockdev.sh@513 -- # NOT wait 122479 00:14:28.449 00:57:02 -- common/autotest_common.sh@650 -- # local es=0 00:14:28.449 00:57:02 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 122479 00:14:28.449 00:57:02 -- common/autotest_common.sh@638 -- # local arg=wait 00:14:28.449 00:57:02 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:28.449 00:57:02 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.449 00:57:02 -- common/autotest_common.sh@642 -- # type -t wait 00:14:28.449 00:57:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.449 00:57:02 -- common/autotest_common.sh@653 -- # wait 122479 00:14:28.449 Running I/O for 5 seconds... 00:14:28.449 task offset: 227368 on job bdev=EE_Dev_1 fails 00:14:28.449 00:14:28.449 Latency(us) 00:14:28.449 [2024-11-18T00:57:02.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.449 [2024-11-18T00:57:02.848Z] Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:28.449 [2024-11-18T00:57:02.848Z] Job: EE_Dev_1 ended in about 0.00 seconds with error 00:14:28.449 EE_Dev_1 : 0.00 31294.45 122.24 7112.38 0.00 341.93 131.66 628.05 00:14:28.449 [2024-11-18T00:57:02.848Z] Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:28.449 Dev_2 : 0.00 22160.66 86.57 0.00 0.00 496.79 133.61 905.02 00:14:28.449 [2024-11-18T00:57:02.848Z] =================================================================================================================== 00:14:28.449 [2024-11-18T00:57:02.848Z] Total : 53455.12 208.81 7112.38 0.00 425.92 131.66 905.02 00:14:28.449 [2024-11-18 00:57:02.729242] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:28.449 request: 00:14:28.449 { 00:14:28.449 "method": "perform_tests", 00:14:28.449 "req_id": 1 00:14:28.449 } 00:14:28.449 Got JSON-RPC error response 00:14:28.449 response: 00:14:28.449 { 00:14:28.449 "code": -32603, 00:14:28.449 "message": "bdevperf failed with error Operation not permitted" 00:14:28.449 } 00:14:29.017 00:57:03 -- common/autotest_common.sh@653 -- # es=255 00:14:29.017 00:57:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:29.017 00:57:03 -- common/autotest_common.sh@662 -- # es=127 00:14:29.017 00:57:03 -- common/autotest_common.sh@663 -- # case "$es" in 00:14:29.017 00:57:03 -- common/autotest_common.sh@670 -- # es=1 00:14:29.017 00:57:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:29.017 00:14:29.017 real 0m9.518s 00:14:29.017 user 0m9.524s 00:14:29.017 sys 0m0.981s 00:14:29.017 00:57:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:29.017 ************************************ 00:14:29.017 END TEST bdev_error 00:14:29.017 ************************************ 00:14:29.017 00:57:03 -- common/autotest_common.sh@10 -- # set +x 00:14:29.017 00:57:03 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:14:29.017 00:57:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:29.017 00:57:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.017 00:57:03 -- common/autotest_common.sh@10 -- # set +x 00:14:29.017 ************************************ 00:14:29.017 START TEST bdev_stat 00:14:29.017 ************************************ 00:14:29.017 00:57:03 -- common/autotest_common.sh@1114 -- # stat_test_suite '' 00:14:29.017 00:57:03 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:14:29.017 00:57:03 -- bdev/blockdev.sh@594 -- # STAT_PID=122525 00:14:29.017 Process Bdev IO statistics testing pid: 122525 00:14:29.017 00:57:03 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 122525' 00:14:29.017 00:57:03 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:14:29.017 00:57:03 -- bdev/blockdev.sh@597 -- # waitforlisten 122525 00:14:29.017 00:57:03 -- 
common/autotest_common.sh@829 -- # '[' -z 122525 ']' 00:14:29.017 00:57:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.017 00:57:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.017 00:57:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.017 00:57:03 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:14:29.017 00:57:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.017 00:57:03 -- common/autotest_common.sh@10 -- # set +x 00:14:29.281 [2024-11-18 00:57:03.424661] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:29.281 [2024-11-18 00:57:03.424966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122525 ] 00:14:29.281 [2024-11-18 00:57:03.587375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:29.281 [2024-11-18 00:57:03.669500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.281 [2024-11-18 00:57:03.669512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.270 00:57:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.270 00:57:04 -- common/autotest_common.sh@862 -- # return 0 00:14:30.270 00:57:04 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:30.270 00:57:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.270 00:57:04 -- common/autotest_common.sh@10 -- # set +x 00:14:30.270 Malloc_STAT 00:14:30.270 00:57:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.270 00:57:04 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:14:30.270 00:57:04 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:14:30.270 00:57:04 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:30.270 00:57:04 -- common/autotest_common.sh@899 -- # local i 00:14:30.270 00:57:04 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:30.270 00:57:04 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:30.270 00:57:04 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:30.270 00:57:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.270 00:57:04 -- common/autotest_common.sh@10 -- # set +x 00:14:30.270 00:57:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.270 00:57:04 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:30.270 00:57:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.270 00:57:04 -- common/autotest_common.sh@10 -- # set +x 00:14:30.270 [ 00:14:30.270 { 00:14:30.270 "name": "Malloc_STAT", 00:14:30.270 "aliases": [ 00:14:30.270 "3eaff8ce-a6b8-492e-bd3f-ae11b96b70cf" 00:14:30.270 ], 00:14:30.270 "product_name": "Malloc disk", 00:14:30.270 "block_size": 512, 00:14:30.270 "num_blocks": 262144, 00:14:30.270 "uuid": "3eaff8ce-a6b8-492e-bd3f-ae11b96b70cf", 00:14:30.270 "assigned_rate_limits": { 00:14:30.270 "rw_ios_per_sec": 0, 00:14:30.270 "rw_mbytes_per_sec": 0, 00:14:30.270 "r_mbytes_per_sec": 0, 00:14:30.270 "w_mbytes_per_sec": 0 00:14:30.270 }, 00:14:30.270 "claimed": 
false, 00:14:30.270 "zoned": false, 00:14:30.270 "supported_io_types": { 00:14:30.270 "read": true, 00:14:30.270 "write": true, 00:14:30.270 "unmap": true, 00:14:30.270 "write_zeroes": true, 00:14:30.270 "flush": true, 00:14:30.270 "reset": true, 00:14:30.270 "compare": false, 00:14:30.270 "compare_and_write": false, 00:14:30.270 "abort": true, 00:14:30.270 "nvme_admin": false, 00:14:30.270 "nvme_io": false 00:14:30.270 }, 00:14:30.270 "memory_domains": [ 00:14:30.270 { 00:14:30.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.270 "dma_device_type": 2 00:14:30.270 } 00:14:30.270 ], 00:14:30.270 "driver_specific": {} 00:14:30.270 } 00:14:30.270 ] 00:14:30.270 00:57:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.270 00:57:04 -- common/autotest_common.sh@905 -- # return 0 00:14:30.270 00:57:04 -- bdev/blockdev.sh@603 -- # sleep 2 00:14:30.270 00:57:04 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:30.270 Running I/O for 10 seconds... 00:14:32.167 00:57:06 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:14:32.167 00:57:06 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:14:32.167 00:57:06 -- bdev/blockdev.sh@558 -- # local iostats 00:14:32.167 00:57:06 -- bdev/blockdev.sh@559 -- # local io_count1 00:14:32.167 00:57:06 -- bdev/blockdev.sh@560 -- # local io_count2 00:14:32.167 00:57:06 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:14:32.167 00:57:06 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:14:32.167 00:57:06 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:14:32.167 00:57:06 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:14:32.167 00:57:06 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:32.167 00:57:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.167 00:57:06 -- common/autotest_common.sh@10 -- # set +x 00:14:32.167 00:57:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.167 00:57:06 -- bdev/blockdev.sh@566 -- # iostats='{ 00:14:32.167 "tick_rate": 2100000000, 00:14:32.167 "ticks": 1611863263160, 00:14:32.167 "bdevs": [ 00:14:32.167 { 00:14:32.167 "name": "Malloc_STAT", 00:14:32.167 "bytes_read": 960532992, 00:14:32.167 "num_read_ops": 234499, 00:14:32.167 "bytes_written": 0, 00:14:32.167 "num_write_ops": 0, 00:14:32.167 "bytes_unmapped": 0, 00:14:32.167 "num_unmap_ops": 0, 00:14:32.167 "bytes_copied": 0, 00:14:32.167 "num_copy_ops": 0, 00:14:32.167 "read_latency_ticks": 2043486194380, 00:14:32.167 "max_read_latency_ticks": 11752858, 00:14:32.167 "min_read_latency_ticks": 420330, 00:14:32.167 "write_latency_ticks": 0, 00:14:32.167 "max_write_latency_ticks": 0, 00:14:32.167 "min_write_latency_ticks": 0, 00:14:32.167 "unmap_latency_ticks": 0, 00:14:32.167 "max_unmap_latency_ticks": 0, 00:14:32.167 "min_unmap_latency_ticks": 0, 00:14:32.167 "copy_latency_ticks": 0, 00:14:32.167 "max_copy_latency_ticks": 0, 00:14:32.167 "min_copy_latency_ticks": 0, 00:14:32.167 "io_error": {} 00:14:32.167 } 00:14:32.167 ] 00:14:32.167 }' 00:14:32.167 00:57:06 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:14:32.167 00:57:06 -- bdev/blockdev.sh@567 -- # io_count1=234499 00:14:32.167 00:57:06 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:32.167 00:57:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.167 00:57:06 -- common/autotest_common.sh@10 -- # set +x 00:14:32.167 00:57:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:32.167 00:57:06 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:14:32.167 "tick_rate": 2100000000, 00:14:32.167 "ticks": 1611984663444, 00:14:32.167 "name": "Malloc_STAT", 00:14:32.167 "channels": [ 00:14:32.167 { 00:14:32.167 "thread_id": 2, 00:14:32.167 "bytes_read": 491782144, 00:14:32.167 "num_read_ops": 120064, 00:14:32.167 "bytes_written": 0, 00:14:32.167 "num_write_ops": 0, 00:14:32.167 "bytes_unmapped": 0, 00:14:32.167 "num_unmap_ops": 0, 00:14:32.167 "bytes_copied": 0, 00:14:32.167 "num_copy_ops": 0, 00:14:32.167 "read_latency_ticks": 1051423884590, 00:14:32.167 "max_read_latency_ticks": 12378794, 00:14:32.167 "min_read_latency_ticks": 6874064, 00:14:32.167 "write_latency_ticks": 0, 00:14:32.167 "max_write_latency_ticks": 0, 00:14:32.167 "min_write_latency_ticks": 0, 00:14:32.167 "unmap_latency_ticks": 0, 00:14:32.167 "max_unmap_latency_ticks": 0, 00:14:32.167 "min_unmap_latency_ticks": 0, 00:14:32.167 "copy_latency_ticks": 0, 00:14:32.167 "max_copy_latency_ticks": 0, 00:14:32.167 "min_copy_latency_ticks": 0 00:14:32.167 }, 00:14:32.167 { 00:14:32.167 "thread_id": 3, 00:14:32.167 "bytes_read": 497025024, 00:14:32.167 "num_read_ops": 121344, 00:14:32.167 "bytes_written": 0, 00:14:32.167 "num_write_ops": 0, 00:14:32.167 "bytes_unmapped": 0, 00:14:32.167 "num_unmap_ops": 0, 00:14:32.167 "bytes_copied": 0, 00:14:32.167 "num_copy_ops": 0, 00:14:32.167 "read_latency_ticks": 1052728550586, 00:14:32.167 "max_read_latency_ticks": 9558764, 00:14:32.167 "min_read_latency_ticks": 5901584, 00:14:32.167 "write_latency_ticks": 0, 00:14:32.168 "max_write_latency_ticks": 0, 00:14:32.168 "min_write_latency_ticks": 0, 00:14:32.168 "unmap_latency_ticks": 0, 00:14:32.168 "max_unmap_latency_ticks": 0, 00:14:32.168 "min_unmap_latency_ticks": 0, 00:14:32.168 "copy_latency_ticks": 0, 00:14:32.168 "max_copy_latency_ticks": 0, 00:14:32.168 "min_copy_latency_ticks": 0 00:14:32.168 } 00:14:32.168 ] 00:14:32.168 }' 00:14:32.168 00:57:06 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:14:32.427 00:57:06 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=120064 00:14:32.427 00:57:06 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=120064 00:14:32.427 00:57:06 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:14:32.427 00:57:06 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=121344 00:14:32.427 00:57:06 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=241408 00:14:32.427 00:57:06 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:32.427 00:57:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.427 00:57:06 -- common/autotest_common.sh@10 -- # set +x 00:14:32.427 00:57:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.427 00:57:06 -- bdev/blockdev.sh@575 -- # iostats='{ 00:14:32.427 "tick_rate": 2100000000, 00:14:32.427 "ticks": 1612215409426, 00:14:32.427 "bdevs": [ 00:14:32.427 { 00:14:32.427 "name": "Malloc_STAT", 00:14:32.427 "bytes_read": 1043370496, 00:14:32.427 "num_read_ops": 254723, 00:14:32.427 "bytes_written": 0, 00:14:32.427 "num_write_ops": 0, 00:14:32.427 "bytes_unmapped": 0, 00:14:32.427 "num_unmap_ops": 0, 00:14:32.427 "bytes_copied": 0, 00:14:32.427 "num_copy_ops": 0, 00:14:32.427 "read_latency_ticks": 2222034029338, 00:14:32.427 "max_read_latency_ticks": 12926666, 00:14:32.427 "min_read_latency_ticks": 420330, 00:14:32.427 "write_latency_ticks": 0, 00:14:32.427 "max_write_latency_ticks": 0, 00:14:32.427 "min_write_latency_ticks": 0, 00:14:32.427 "unmap_latency_ticks": 0, 00:14:32.427 
"max_unmap_latency_ticks": 0, 00:14:32.427 "min_unmap_latency_ticks": 0, 00:14:32.427 "copy_latency_ticks": 0, 00:14:32.427 "max_copy_latency_ticks": 0, 00:14:32.427 "min_copy_latency_ticks": 0, 00:14:32.427 "io_error": {} 00:14:32.427 } 00:14:32.427 ] 00:14:32.427 }' 00:14:32.427 00:57:06 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:14:32.427 00:57:06 -- bdev/blockdev.sh@576 -- # io_count2=254723 00:14:32.427 00:57:06 -- bdev/blockdev.sh@581 -- # '[' 241408 -lt 234499 ']' 00:14:32.427 00:57:06 -- bdev/blockdev.sh@581 -- # '[' 241408 -gt 254723 ']' 00:14:32.427 00:57:06 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:14:32.427 00:57:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.427 00:57:06 -- common/autotest_common.sh@10 -- # set +x 00:14:32.427 00:14:32.427 Latency(us) 00:14:32.427 [2024-11-18T00:57:06.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.427 [2024-11-18T00:57:06.826Z] Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:32.427 Malloc_STAT : 2.14 61006.29 238.31 0.00 0.00 4187.01 1006.45 6428.77 00:14:32.427 [2024-11-18T00:57:06.826Z] Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:32.427 Malloc_STAT : 2.14 62051.52 242.39 0.00 0.00 4116.74 670.96 4618.73 00:14:32.427 [2024-11-18T00:57:06.826Z] =================================================================================================================== 00:14:32.427 [2024-11-18T00:57:06.826Z] Total : 123057.80 480.69 0.00 0.00 4151.57 670.96 6428.77 00:14:32.427 0 00:14:32.427 00:57:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.427 00:57:06 -- bdev/blockdev.sh@607 -- # killprocess 122525 00:14:32.427 00:57:06 -- common/autotest_common.sh@936 -- # '[' -z 122525 ']' 00:14:32.427 00:57:06 -- common/autotest_common.sh@940 -- # kill -0 122525 00:14:32.427 00:57:06 -- common/autotest_common.sh@941 -- # uname 00:14:32.427 00:57:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:32.427 00:57:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122525 00:14:32.427 00:57:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:32.427 00:57:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:32.427 killing process with pid 122525 00:14:32.427 00:57:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122525' 00:14:32.427 Received shutdown signal, test time was about 2.208712 seconds 00:14:32.427 00:14:32.427 Latency(us) 00:14:32.427 [2024-11-18T00:57:06.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.427 [2024-11-18T00:57:06.826Z] =================================================================================================================== 00:14:32.427 [2024-11-18T00:57:06.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:32.427 00:57:06 -- common/autotest_common.sh@955 -- # kill 122525 00:14:32.427 00:57:06 -- common/autotest_common.sh@960 -- # wait 122525 00:14:32.995 00:57:07 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:14:32.995 00:14:32.995 real 0m3.858s 00:14:32.995 user 0m7.391s 00:14:32.995 sys 0m0.536s 00:14:32.995 00:57:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:32.995 ************************************ 00:14:32.995 END TEST bdev_stat 00:14:32.995 ************************************ 00:14:32.995 00:57:07 -- common/autotest_common.sh@10 -- # set +x 00:14:32.995 00:57:07 -- 
bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:14:32.995 00:57:07 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:14:32.995 00:57:07 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:14:32.995 00:57:07 -- bdev/blockdev.sh@809 -- # cleanup 00:14:32.995 00:57:07 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:32.995 00:57:07 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:32.995 00:57:07 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:14:32.995 00:57:07 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:14:32.995 00:57:07 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:14:32.995 00:57:07 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:14:32.995 00:14:32.995 real 1m58.870s 00:14:32.995 user 5m10.084s 00:14:32.995 sys 0m24.617s 00:14:32.995 00:57:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:32.995 00:57:07 -- common/autotest_common.sh@10 -- # set +x 00:14:32.995 ************************************ 00:14:32.995 END TEST blockdev_general 00:14:32.995 ************************************ 00:14:32.995 00:57:07 -- spdk/autotest.sh@183 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:32.995 00:57:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:32.995 00:57:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.995 00:57:07 -- common/autotest_common.sh@10 -- # set +x 00:14:32.995 ************************************ 00:14:32.996 START TEST bdev_raid 00:14:32.996 ************************************ 00:14:32.996 00:57:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:33.255 * Looking for test storage... 00:14:33.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:33.255 00:57:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:33.255 00:57:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:33.255 00:57:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:33.255 00:57:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:33.255 00:57:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:33.255 00:57:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:33.255 00:57:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:33.255 00:57:07 -- scripts/common.sh@335 -- # IFS=.-: 00:14:33.255 00:57:07 -- scripts/common.sh@335 -- # read -ra ver1 00:14:33.255 00:57:07 -- scripts/common.sh@336 -- # IFS=.-: 00:14:33.255 00:57:07 -- scripts/common.sh@336 -- # read -ra ver2 00:14:33.255 00:57:07 -- scripts/common.sh@337 -- # local 'op=<' 00:14:33.255 00:57:07 -- scripts/common.sh@339 -- # ver1_l=2 00:14:33.255 00:57:07 -- scripts/common.sh@340 -- # ver2_l=1 00:14:33.255 00:57:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:33.255 00:57:07 -- scripts/common.sh@343 -- # case "$op" in 00:14:33.255 00:57:07 -- scripts/common.sh@344 -- # : 1 00:14:33.255 00:57:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:33.255 00:57:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:33.255 00:57:07 -- scripts/common.sh@364 -- # decimal 1 00:14:33.255 00:57:07 -- scripts/common.sh@352 -- # local d=1 00:14:33.255 00:57:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:33.255 00:57:07 -- scripts/common.sh@354 -- # echo 1 00:14:33.255 00:57:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:33.255 00:57:07 -- scripts/common.sh@365 -- # decimal 2 00:14:33.255 00:57:07 -- scripts/common.sh@352 -- # local d=2 00:14:33.255 00:57:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:33.255 00:57:07 -- scripts/common.sh@354 -- # echo 2 00:14:33.255 00:57:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:33.255 00:57:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:33.255 00:57:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:33.255 00:57:07 -- scripts/common.sh@367 -- # return 0 00:14:33.255 00:57:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:33.255 00:57:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:33.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.255 --rc genhtml_branch_coverage=1 00:14:33.255 --rc genhtml_function_coverage=1 00:14:33.255 --rc genhtml_legend=1 00:14:33.255 --rc geninfo_all_blocks=1 00:14:33.255 --rc geninfo_unexecuted_blocks=1 00:14:33.255 00:14:33.255 ' 00:14:33.255 00:57:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:33.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.255 --rc genhtml_branch_coverage=1 00:14:33.255 --rc genhtml_function_coverage=1 00:14:33.255 --rc genhtml_legend=1 00:14:33.255 --rc geninfo_all_blocks=1 00:14:33.255 --rc geninfo_unexecuted_blocks=1 00:14:33.255 00:14:33.255 ' 00:14:33.255 00:57:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:33.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.255 --rc genhtml_branch_coverage=1 00:14:33.255 --rc genhtml_function_coverage=1 00:14:33.255 --rc genhtml_legend=1 00:14:33.255 --rc geninfo_all_blocks=1 00:14:33.255 --rc geninfo_unexecuted_blocks=1 00:14:33.255 00:14:33.255 ' 00:14:33.255 00:57:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:33.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.255 --rc genhtml_branch_coverage=1 00:14:33.255 --rc genhtml_function_coverage=1 00:14:33.255 --rc genhtml_legend=1 00:14:33.255 --rc geninfo_all_blocks=1 00:14:33.255 --rc geninfo_unexecuted_blocks=1 00:14:33.255 00:14:33.255 ' 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:33.255 00:57:07 -- bdev/nbd_common.sh@6 -- # set -e 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@716 -- # uname -s 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:33.255 00:57:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:33.255 00:57:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.255 00:57:07 -- 
common/autotest_common.sh@10 -- # set +x 00:14:33.255 ************************************ 00:14:33.255 START TEST raid_function_test_raid0 00:14:33.255 ************************************ 00:14:33.255 00:57:07 -- common/autotest_common.sh@1114 -- # raid_function_test raid0 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@86 -- # raid_pid=122677 00:14:33.255 Process raid pid: 122677 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 122677' 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@88 -- # waitforlisten 122677 /var/tmp/spdk-raid.sock 00:14:33.255 00:57:07 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:33.255 00:57:07 -- common/autotest_common.sh@829 -- # '[' -z 122677 ']' 00:14:33.255 00:57:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:33.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:33.255 00:57:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.255 00:57:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:33.255 00:57:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.255 00:57:07 -- common/autotest_common.sh@10 -- # set +x 00:14:33.514 [2024-11-18 00:57:07.657186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:33.514 [2024-11-18 00:57:07.658313] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.514 [2024-11-18 00:57:07.820965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.514 [2024-11-18 00:57:07.893434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.772 [2024-11-18 00:57:07.971046] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.340 00:57:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.340 00:57:08 -- common/autotest_common.sh@862 -- # return 0 00:14:34.340 00:57:08 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:14:34.340 00:57:08 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:14:34.340 00:57:08 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:34.340 00:57:08 -- bdev/bdev_raid.sh@70 -- # cat 00:14:34.340 00:57:08 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:34.599 [2024-11-18 00:57:08.903093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:34.599 [2024-11-18 00:57:08.906358] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:34.599 [2024-11-18 00:57:08.906416] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:34.599 [2024-11-18 00:57:08.906426] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:34.599 [2024-11-18 00:57:08.906585] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:34.599 [2024-11-18 00:57:08.907006] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:34.599 [2024-11-18 00:57:08.907024] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080 00:14:34.599 [2024-11-18 00:57:08.907227] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.599 Base_1 00:14:34.599 Base_2 00:14:34.599 00:57:08 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:34.599 00:57:08 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:34.599 00:57:08 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:34.858 00:57:09 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:34.858 00:57:09 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:34.858 00:57:09 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:34.858 00:57:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:34.858 00:57:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:34.858 00:57:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:34.858 00:57:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:34.858 00:57:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:34.858 00:57:09 -- bdev/nbd_common.sh@12 -- # local i 00:14:34.858 00:57:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:34.858 00:57:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:34.858 00:57:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:35.116 [2024-11-18 00:57:09.263371] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:35.116 /dev/nbd0 00:14:35.116 00:57:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:35.116 00:57:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:35.116 00:57:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:35.116 00:57:09 -- common/autotest_common.sh@867 -- # local i 00:14:35.116 00:57:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:35.116 00:57:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:35.116 00:57:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:35.116 00:57:09 -- common/autotest_common.sh@871 -- # break 00:14:35.116 00:57:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:35.116 00:57:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:35.116 00:57:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.116 1+0 records in 00:14:35.116 1+0 records out 00:14:35.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235948 s, 17.4 MB/s 00:14:35.116 00:57:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.116 00:57:09 -- common/autotest_common.sh@884 -- # size=4096 00:14:35.116 00:57:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.116 00:57:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:35.116 00:57:09 -- common/autotest_common.sh@887 -- # return 0 00:14:35.116 00:57:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.116 00:57:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.116 00:57:09 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:35.116 00:57:09 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:35.116 00:57:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:35.375 00:57:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:35.375 { 00:14:35.375 "nbd_device": "/dev/nbd0", 00:14:35.375 "bdev_name": "raid" 00:14:35.375 } 00:14:35.375 ]' 00:14:35.375 00:57:09 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:35.375 { 00:14:35.375 "nbd_device": "/dev/nbd0", 00:14:35.375 "bdev_name": "raid" 00:14:35.375 } 00:14:35.375 ]' 00:14:35.375 00:57:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:35.375 00:57:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:35.375 00:57:09 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:35.375 00:57:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:35.375 00:57:09 -- bdev/nbd_common.sh@65 -- # count=1 00:14:35.375 00:57:09 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:35.375 4096+0 records in 00:14:35.375 4096+0 records out 00:14:35.375 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0315 s, 66.6 MB/s 00:14:35.375 00:57:09 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:35.634 4096+0 records in 00:14:35.634 4096+0 records out 00:14:35.634 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.250236 s, 8.4 MB/s 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:35.634 128+0 records in 00:14:35.634 128+0 records out 00:14:35.634 65536 bytes (66 kB, 64 KiB) copied, 0.000589752 s, 
111 MB/s 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:35.634 2035+0 records in 00:14:35.634 2035+0 records out 00:14:35.634 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00759509 s, 137 MB/s 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:35.634 00:57:09 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:35.634 00:57:10 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:35.634 00:57:10 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:35.634 00:57:10 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:35.634 00:57:10 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:35.634 00:57:10 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:35.634 456+0 records in 00:14:35.634 456+0 records out 00:14:35.634 233472 bytes (233 kB, 228 KiB) copied, 0.00246089 s, 94.9 MB/s 00:14:35.634 00:57:10 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:35.634 00:57:10 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:35.634 00:57:10 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:35.634 00:57:10 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:35.634 00:57:10 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:35.634 00:57:10 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:35.634 00:57:10 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:35.634 00:57:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:35.634 00:57:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:35.634 00:57:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:35.634 00:57:10 -- bdev/nbd_common.sh@51 -- # local i 00:14:35.634 00:57:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:35.634 00:57:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:35.893 00:57:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:35.893 [2024-11-18 00:57:10.219474] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.893 00:57:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:35.893 00:57:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:35.893 00:57:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:35.893 00:57:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:35.893 00:57:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:35.893 00:57:10 -- bdev/nbd_common.sh@41 -- # break 00:14:35.893 00:57:10 -- bdev/nbd_common.sh@45 -- # return 0 00:14:35.893 00:57:10 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:35.893 00:57:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:35.893 00:57:10 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:36.151 00:57:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:36.151 00:57:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:36.151 00:57:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:36.151 00:57:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:36.151 00:57:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:36.151 00:57:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:36.410 00:57:10 -- bdev/nbd_common.sh@65 -- # true 00:14:36.410 00:57:10 -- bdev/nbd_common.sh@65 -- # count=0 00:14:36.410 00:57:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:36.410 00:57:10 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:36.410 00:57:10 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:36.410 00:57:10 -- bdev/bdev_raid.sh@111 -- # killprocess 122677 00:14:36.410 00:57:10 -- common/autotest_common.sh@936 -- # '[' -z 122677 ']' 00:14:36.410 00:57:10 -- common/autotest_common.sh@940 -- # kill -0 122677 00:14:36.410 00:57:10 -- common/autotest_common.sh@941 -- # uname 00:14:36.410 00:57:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:36.410 00:57:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122677 00:14:36.410 00:57:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:36.410 killing process with pid 122677 00:14:36.410 00:57:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:36.410 00:57:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122677' 00:14:36.410 00:57:10 -- common/autotest_common.sh@955 -- # kill 122677 00:14:36.410 [2024-11-18 00:57:10.585341] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:36.410 00:57:10 -- common/autotest_common.sh@960 -- # wait 122677 00:14:36.410 [2024-11-18 00:57:10.585500] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.410 [2024-11-18 00:57:10.585565] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:36.410 [2024-11-18 00:57:10.585580] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline 00:14:36.410 [2024-11-18 00:57:10.625858] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.669 00:57:11 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:36.669 00:14:36.669 real 0m3.432s 00:14:36.669 user 0m4.419s 00:14:36.669 sys 0m1.125s 00:14:36.669 00:57:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:36.669 00:57:11 -- common/autotest_common.sh@10 -- # set +x 00:14:36.669 ************************************ 00:14:36.669 END TEST raid_function_test_raid0 00:14:36.669 ************************************ 00:14:36.928 00:57:11 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:14:36.928 00:57:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:36.928 00:57:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:36.928 00:57:11 -- common/autotest_common.sh@10 -- # set +x 00:14:36.928 ************************************ 00:14:36.928 START TEST raid_function_test_concat 00:14:36.928 ************************************ 00:14:36.928 00:57:11 -- common/autotest_common.sh@1114 -- # raid_function_test concat 00:14:36.928 00:57:11 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:14:36.928 00:57:11 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:36.928 00:57:11 -- bdev/bdev_raid.sh@83 -- # 
local raid_bdev 00:14:36.928 00:57:11 -- bdev/bdev_raid.sh@86 -- # raid_pid=122823 00:14:36.928 Process raid pid: 122823 00:14:36.928 00:57:11 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 122823' 00:14:36.928 00:57:11 -- bdev/bdev_raid.sh@88 -- # waitforlisten 122823 /var/tmp/spdk-raid.sock 00:14:36.928 00:57:11 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:36.928 00:57:11 -- common/autotest_common.sh@829 -- # '[' -z 122823 ']' 00:14:36.928 00:57:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:36.928 00:57:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:36.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:36.928 00:57:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:36.928 00:57:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:36.928 00:57:11 -- common/autotest_common.sh@10 -- # set +x 00:14:36.928 [2024-11-18 00:57:11.157691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:36.928 [2024-11-18 00:57:11.157940] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.928 [2024-11-18 00:57:11.313244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.186 [2024-11-18 00:57:11.387681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.186 [2024-11-18 00:57:11.465476] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.755 00:57:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.755 00:57:12 -- common/autotest_common.sh@862 -- # return 0 00:14:37.755 00:57:12 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:14:37.755 00:57:12 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:14:37.755 00:57:12 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:37.755 00:57:12 -- bdev/bdev_raid.sh@70 -- # cat 00:14:37.755 00:57:12 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:38.014 [2024-11-18 00:57:12.366772] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:38.014 [2024-11-18 00:57:12.369459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:38.014 [2024-11-18 00:57:12.369536] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:38.014 [2024-11-18 00:57:12.369548] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:38.014 [2024-11-18 00:57:12.369741] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:38.014 [2024-11-18 00:57:12.370233] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:38.014 [2024-11-18 00:57:12.370253] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080 00:14:38.014 [2024-11-18 00:57:12.370437] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.014 Base_1 00:14:38.014 Base_2 00:14:38.014 00:57:12 -- bdev/bdev_raid.sh@77 -- # rm -rf 
/home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:38.014 00:57:12 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:38.014 00:57:12 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:38.273 00:57:12 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:38.273 00:57:12 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:38.273 00:57:12 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:38.273 00:57:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:38.273 00:57:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:38.273 00:57:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.273 00:57:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:38.273 00:57:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.273 00:57:12 -- bdev/nbd_common.sh@12 -- # local i 00:14:38.273 00:57:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.273 00:57:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.273 00:57:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:38.532 [2024-11-18 00:57:12.814880] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:38.532 /dev/nbd0 00:14:38.532 00:57:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:38.532 00:57:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:38.532 00:57:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:38.532 00:57:12 -- common/autotest_common.sh@867 -- # local i 00:14:38.532 00:57:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:38.532 00:57:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:38.532 00:57:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:38.532 00:57:12 -- common/autotest_common.sh@871 -- # break 00:14:38.532 00:57:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:38.532 00:57:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:38.532 00:57:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.532 1+0 records in 00:14:38.532 1+0 records out 00:14:38.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345234 s, 11.9 MB/s 00:14:38.532 00:57:12 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.532 00:57:12 -- common/autotest_common.sh@884 -- # size=4096 00:14:38.532 00:57:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.532 00:57:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:38.532 00:57:12 -- common/autotest_common.sh@887 -- # return 0 00:14:38.532 00:57:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.532 00:57:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.532 00:57:12 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:38.532 00:57:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:38.532 00:57:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:38.790 00:57:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:38.790 { 00:14:38.790 "nbd_device": "/dev/nbd0", 00:14:38.790 "bdev_name": "raid" 00:14:38.790 } 00:14:38.790 ]' 00:14:38.790 00:57:13 -- bdev/nbd_common.sh@64 -- # echo '[ 
00:14:38.790 { 00:14:38.790 "nbd_device": "/dev/nbd0", 00:14:38.790 "bdev_name": "raid" 00:14:38.790 } 00:14:38.790 ]' 00:14:38.790 00:57:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:38.790 00:57:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:38.790 00:57:13 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:38.790 00:57:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:38.790 00:57:13 -- bdev/nbd_common.sh@65 -- # count=1 00:14:38.790 00:57:13 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:38.790 00:57:13 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:39.049 4096+0 records in 00:14:39.049 4096+0 records out 00:14:39.049 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0263526 s, 79.6 MB/s 00:14:39.049 00:57:13 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:39.307 4096+0 records in 00:14:39.307 4096+0 records out 00:14:39.307 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.251075 s, 8.4 MB/s 00:14:39.307 00:57:13 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:39.307 00:57:13 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:39.307 00:57:13 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:39.307 00:57:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:39.307 00:57:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:39.307 00:57:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:39.307 00:57:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:39.307 128+0 records in 00:14:39.307 128+0 records out 00:14:39.308 65536 bytes (66 kB, 64 KiB) copied, 0.00124144 s, 52.8 MB/s 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@38 -- # 
unmap_off=526336 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:39.308 2035+0 records in 00:14:39.308 2035+0 records out 00:14:39.308 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0101273 s, 103 MB/s 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:39.308 456+0 records in 00:14:39.308 456+0 records out 00:14:39.308 233472 bytes (233 kB, 228 KiB) copied, 0.00275278 s, 84.8 MB/s 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:39.308 00:57:13 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:39.308 00:57:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:39.308 00:57:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:39.308 00:57:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.308 00:57:13 -- bdev/nbd_common.sh@51 -- # local i 00:14:39.308 00:57:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.308 00:57:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:39.566 00:57:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.566 [2024-11-18 00:57:13.839591] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.566 00:57:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.566 00:57:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.566 00:57:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.566 00:57:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.566 00:57:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.566 00:57:13 -- bdev/nbd_common.sh@41 -- # break 00:14:39.566 00:57:13 -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.566 00:57:13 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:39.566 00:57:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:39.566 00:57:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:39.826 00:57:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:39.826 00:57:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:39.826 00:57:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:39.826 00:57:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:39.826 00:57:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:39.826 
00:57:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:39.826 00:57:14 -- bdev/nbd_common.sh@65 -- # true 00:14:39.826 00:57:14 -- bdev/nbd_common.sh@65 -- # count=0 00:14:39.826 00:57:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:39.826 00:57:14 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:39.826 00:57:14 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:39.826 00:57:14 -- bdev/bdev_raid.sh@111 -- # killprocess 122823 00:14:39.826 00:57:14 -- common/autotest_common.sh@936 -- # '[' -z 122823 ']' 00:14:39.826 00:57:14 -- common/autotest_common.sh@940 -- # kill -0 122823 00:14:39.826 00:57:14 -- common/autotest_common.sh@941 -- # uname 00:14:39.826 00:57:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:39.826 00:57:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122823 00:14:39.826 00:57:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:39.826 00:57:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:39.826 00:57:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122823' 00:14:39.826 killing process with pid 122823 00:14:39.826 00:57:14 -- common/autotest_common.sh@955 -- # kill 122823 00:14:39.826 [2024-11-18 00:57:14.197162] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.826 [2024-11-18 00:57:14.197301] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.826 [2024-11-18 00:57:14.197383] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.826 [2024-11-18 00:57:14.197393] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline 00:14:39.826 00:57:14 -- common/autotest_common.sh@960 -- # wait 122823 00:14:40.085 [2024-11-18 00:57:14.237160] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.344 00:57:14 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:40.344 00:14:40.344 real 0m3.542s 00:14:40.344 user 0m4.554s 00:14:40.344 sys 0m1.217s 00:14:40.344 00:57:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:40.344 00:57:14 -- common/autotest_common.sh@10 -- # set +x 00:14:40.344 ************************************ 00:14:40.344 END TEST raid_function_test_concat 00:14:40.344 ************************************ 00:14:40.344 00:57:14 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:14:40.344 00:57:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:40.344 00:57:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:40.344 00:57:14 -- common/autotest_common.sh@10 -- # set +x 00:14:40.344 ************************************ 00:14:40.344 START TEST raid0_resize_test 00:14:40.344 ************************************ 00:14:40.344 00:57:14 -- common/autotest_common.sh@1114 -- # raid0_resize_test 00:14:40.344 00:57:14 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:14:40.344 00:57:14 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:14:40.344 00:57:14 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:14:40.344 00:57:14 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:14:40.344 00:57:14 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:14:40.344 00:57:14 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:14:40.344 00:57:14 -- bdev/bdev_raid.sh@301 -- # raid_pid=122973 00:14:40.344 00:57:14 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 122973' 00:14:40.344 Process raid pid: 122973 00:14:40.344 00:57:14 -- bdev/bdev_raid.sh@300 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:40.344 00:57:14 -- bdev/bdev_raid.sh@303 -- # waitforlisten 122973 /var/tmp/spdk-raid.sock 00:14:40.344 00:57:14 -- common/autotest_common.sh@829 -- # '[' -z 122973 ']' 00:14:40.344 00:57:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:40.344 00:57:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.344 00:57:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:40.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:40.344 00:57:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.344 00:57:14 -- common/autotest_common.sh@10 -- # set +x 00:14:40.603 [2024-11-18 00:57:14.769164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:40.603 [2024-11-18 00:57:14.769442] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.603 [2024-11-18 00:57:14.924619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.603 [2024-11-18 00:57:14.995035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.862 [2024-11-18 00:57:15.072321] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.432 00:57:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.432 00:57:15 -- common/autotest_common.sh@862 -- # return 0 00:14:41.432 00:57:15 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:41.690 Base_1 00:14:41.690 00:57:15 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:41.690 Base_2 00:14:41.690 00:57:16 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:41.950 [2024-11-18 00:57:16.279675] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:41.950 [2024-11-18 00:57:16.282087] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:41.950 [2024-11-18 00:57:16.282158] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:41.950 [2024-11-18 00:57:16.282168] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:41.950 [2024-11-18 00:57:16.282360] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001de0 00:14:41.950 [2024-11-18 00:57:16.282769] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:41.950 [2024-11-18 00:57:16.282787] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006080 00:14:41.950 [2024-11-18 00:57:16.282957] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.950 00:57:16 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:42.210 [2024-11-18 00:57:16.455635] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:42.210 
[2024-11-18 00:57:16.455666] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:42.210 true 00:14:42.210 00:57:16 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:42.210 00:57:16 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:14:42.469 [2024-11-18 00:57:16.683802] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.469 00:57:16 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:14:42.469 00:57:16 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:14:42.469 00:57:16 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:14:42.469 00:57:16 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:42.469 [2024-11-18 00:57:16.863642] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:42.469 [2024-11-18 00:57:16.863684] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:42.469 [2024-11-18 00:57:16.863724] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:14:42.469 [2024-11-18 00:57:16.863800] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:42.469 true 00:14:42.729 00:57:16 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:42.729 00:57:16 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:14:42.729 [2024-11-18 00:57:17.039806] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.729 00:57:17 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:14:42.729 00:57:17 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:14:42.729 00:57:17 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:14:42.729 00:57:17 -- bdev/bdev_raid.sh@332 -- # killprocess 122973 00:14:42.729 00:57:17 -- common/autotest_common.sh@936 -- # '[' -z 122973 ']' 00:14:42.729 00:57:17 -- common/autotest_common.sh@940 -- # kill -0 122973 00:14:42.729 00:57:17 -- common/autotest_common.sh@941 -- # uname 00:14:42.729 00:57:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:42.729 00:57:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122973 00:14:42.729 00:57:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:42.729 00:57:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:42.729 00:57:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122973' 00:14:42.729 killing process with pid 122973 00:14:42.729 00:57:17 -- common/autotest_common.sh@955 -- # kill 122973 00:14:42.729 [2024-11-18 00:57:17.089896] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.729 [2024-11-18 00:57:17.090008] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.729 [2024-11-18 00:57:17.090064] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.729 [2024-11-18 00:57:17.090074] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Raid, state offline 00:14:42.729 00:57:17 -- common/autotest_common.sh@960 -- # wait 122973 00:14:42.729 [2024-11-18 00:57:17.090723] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.298 00:57:17 -- 
bdev/bdev_raid.sh@334 -- # return 0 00:14:43.298 00:14:43.298 real 0m2.784s 00:14:43.298 user 0m3.933s 00:14:43.298 sys 0m0.653s 00:14:43.298 00:57:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:43.298 00:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:43.298 ************************************ 00:14:43.298 END TEST raid0_resize_test 00:14:43.298 ************************************ 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:43.298 00:57:17 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:43.298 00:57:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:43.298 00:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:43.298 ************************************ 00:14:43.298 START TEST raid_state_function_test 00:14:43.298 ************************************ 00:14:43.298 00:57:17 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 false 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@226 -- # raid_pid=123055 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123055' 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:43.298 Process raid pid: 123055 00:14:43.298 00:57:17 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123055 /var/tmp/spdk-raid.sock 00:14:43.298 00:57:17 -- common/autotest_common.sh@829 -- # '[' -z 123055 ']' 00:14:43.298 00:57:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:43.298 00:57:17 -- common/autotest_common.sh@834 -- # local max_retries=100 
00:14:43.298 00:57:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:43.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:43.298 00:57:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.298 00:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:43.298 [2024-11-18 00:57:17.620002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:43.298 [2024-11-18 00:57:17.620203] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.557 [2024-11-18 00:57:17.763609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.557 [2024-11-18 00:57:17.834519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.557 [2024-11-18 00:57:17.911903] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.495 00:57:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.495 00:57:18 -- common/autotest_common.sh@862 -- # return 0 00:14:44.495 00:57:18 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:44.495 [2024-11-18 00:57:18.823214] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.495 [2024-11-18 00:57:18.823308] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.495 [2024-11-18 00:57:18.823319] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.495 [2024-11-18 00:57:18.823338] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.495 00:57:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:44.495 00:57:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:44.495 00:57:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:44.495 00:57:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:44.495 00:57:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:44.495 00:57:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:44.495 00:57:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:44.495 00:57:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:44.495 00:57:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:44.495 00:57:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:44.495 00:57:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.495 00:57:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.754 00:57:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:44.754 "name": "Existed_Raid", 00:14:44.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.754 "strip_size_kb": 64, 00:14:44.754 "state": "configuring", 00:14:44.754 "raid_level": "raid0", 00:14:44.754 "superblock": false, 00:14:44.754 "num_base_bdevs": 2, 00:14:44.754 "num_base_bdevs_discovered": 0, 00:14:44.754 "num_base_bdevs_operational": 2, 00:14:44.754 "base_bdevs_list": [ 00:14:44.754 { 00:14:44.754 "name": "BaseBdev1", 
00:14:44.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.754 "is_configured": false, 00:14:44.754 "data_offset": 0, 00:14:44.754 "data_size": 0 00:14:44.754 }, 00:14:44.754 { 00:14:44.754 "name": "BaseBdev2", 00:14:44.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.754 "is_configured": false, 00:14:44.754 "data_offset": 0, 00:14:44.754 "data_size": 0 00:14:44.754 } 00:14:44.754 ] 00:14:44.754 }' 00:14:44.754 00:57:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:44.754 00:57:19 -- common/autotest_common.sh@10 -- # set +x 00:14:45.330 00:57:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:45.589 [2024-11-18 00:57:19.795227] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.589 [2024-11-18 00:57:19.795287] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:45.589 00:57:19 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:45.849 [2024-11-18 00:57:20.047343] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.849 [2024-11-18 00:57:20.047448] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.849 [2024-11-18 00:57:20.047459] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.849 [2024-11-18 00:57:20.047485] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.849 00:57:20 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.849 [2024-11-18 00:57:20.230934] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.849 BaseBdev1 00:14:45.849 00:57:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:45.849 00:57:20 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:45.849 00:57:20 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:46.108 00:57:20 -- common/autotest_common.sh@899 -- # local i 00:14:46.108 00:57:20 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:46.108 00:57:20 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:46.108 00:57:20 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.108 00:57:20 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:46.367 [ 00:14:46.367 { 00:14:46.367 "name": "BaseBdev1", 00:14:46.367 "aliases": [ 00:14:46.367 "48ae5a42-b27f-4ccf-9f35-49c6729d6234" 00:14:46.367 ], 00:14:46.367 "product_name": "Malloc disk", 00:14:46.367 "block_size": 512, 00:14:46.367 "num_blocks": 65536, 00:14:46.367 "uuid": "48ae5a42-b27f-4ccf-9f35-49c6729d6234", 00:14:46.367 "assigned_rate_limits": { 00:14:46.367 "rw_ios_per_sec": 0, 00:14:46.367 "rw_mbytes_per_sec": 0, 00:14:46.367 "r_mbytes_per_sec": 0, 00:14:46.367 "w_mbytes_per_sec": 0 00:14:46.367 }, 00:14:46.367 "claimed": true, 00:14:46.367 "claim_type": "exclusive_write", 00:14:46.367 "zoned": false, 00:14:46.367 "supported_io_types": { 00:14:46.367 "read": true, 00:14:46.367 "write": true, 00:14:46.367 "unmap": true, 
00:14:46.367 "write_zeroes": true, 00:14:46.367 "flush": true, 00:14:46.367 "reset": true, 00:14:46.367 "compare": false, 00:14:46.367 "compare_and_write": false, 00:14:46.367 "abort": true, 00:14:46.367 "nvme_admin": false, 00:14:46.367 "nvme_io": false 00:14:46.367 }, 00:14:46.367 "memory_domains": [ 00:14:46.367 { 00:14:46.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.367 "dma_device_type": 2 00:14:46.367 } 00:14:46.367 ], 00:14:46.367 "driver_specific": {} 00:14:46.367 } 00:14:46.367 ] 00:14:46.367 00:57:20 -- common/autotest_common.sh@905 -- # return 0 00:14:46.367 00:57:20 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:46.367 00:57:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:46.368 00:57:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:46.368 00:57:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:46.368 00:57:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:46.368 00:57:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:46.368 00:57:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:46.368 00:57:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:46.368 00:57:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:46.368 00:57:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:46.368 00:57:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.368 00:57:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.627 00:57:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:46.627 "name": "Existed_Raid", 00:14:46.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.627 "strip_size_kb": 64, 00:14:46.627 "state": "configuring", 00:14:46.627 "raid_level": "raid0", 00:14:46.627 "superblock": false, 00:14:46.627 "num_base_bdevs": 2, 00:14:46.627 "num_base_bdevs_discovered": 1, 00:14:46.627 "num_base_bdevs_operational": 2, 00:14:46.627 "base_bdevs_list": [ 00:14:46.627 { 00:14:46.627 "name": "BaseBdev1", 00:14:46.627 "uuid": "48ae5a42-b27f-4ccf-9f35-49c6729d6234", 00:14:46.627 "is_configured": true, 00:14:46.627 "data_offset": 0, 00:14:46.627 "data_size": 65536 00:14:46.627 }, 00:14:46.627 { 00:14:46.627 "name": "BaseBdev2", 00:14:46.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.627 "is_configured": false, 00:14:46.627 "data_offset": 0, 00:14:46.627 "data_size": 0 00:14:46.627 } 00:14:46.627 ] 00:14:46.627 }' 00:14:46.627 00:57:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:46.627 00:57:20 -- common/autotest_common.sh@10 -- # set +x 00:14:47.196 00:57:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:47.455 [2024-11-18 00:57:21.651203] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:47.455 [2024-11-18 00:57:21.651292] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:47.455 00:57:21 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:47.455 00:57:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:47.714 [2024-11-18 00:57:21.911336] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.714 [2024-11-18 
00:57:21.913766] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.714 [2024-11-18 00:57:21.913846] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.714 00:57:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:47.714 00:57:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:47.714 00:57:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:47.714 00:57:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:47.714 00:57:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:47.714 00:57:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:47.714 00:57:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:47.714 00:57:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:47.714 00:57:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:47.714 00:57:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:47.714 00:57:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:47.714 00:57:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:47.714 00:57:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.714 00:57:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.974 00:57:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:47.974 "name": "Existed_Raid", 00:14:47.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.974 "strip_size_kb": 64, 00:14:47.974 "state": "configuring", 00:14:47.974 "raid_level": "raid0", 00:14:47.974 "superblock": false, 00:14:47.974 "num_base_bdevs": 2, 00:14:47.974 "num_base_bdevs_discovered": 1, 00:14:47.974 "num_base_bdevs_operational": 2, 00:14:47.974 "base_bdevs_list": [ 00:14:47.974 { 00:14:47.974 "name": "BaseBdev1", 00:14:47.974 "uuid": "48ae5a42-b27f-4ccf-9f35-49c6729d6234", 00:14:47.974 "is_configured": true, 00:14:47.974 "data_offset": 0, 00:14:47.974 "data_size": 65536 00:14:47.974 }, 00:14:47.974 { 00:14:47.974 "name": "BaseBdev2", 00:14:47.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.974 "is_configured": false, 00:14:47.974 "data_offset": 0, 00:14:47.974 "data_size": 0 00:14:47.974 } 00:14:47.974 ] 00:14:47.974 }' 00:14:47.974 00:57:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:47.974 00:57:22 -- common/autotest_common.sh@10 -- # set +x 00:14:48.542 00:57:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.801 [2024-11-18 00:57:22.979611] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.801 [2024-11-18 00:57:22.979679] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:48.801 [2024-11-18 00:57:22.979693] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:48.801 [2024-11-18 00:57:22.979890] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:48.801 [2024-11-18 00:57:22.980482] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:48.801 [2024-11-18 00:57:22.980510] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:14:48.801 [2024-11-18 00:57:22.980875] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:48.801 BaseBdev2 00:14:48.801 00:57:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:48.801 00:57:23 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:48.801 00:57:23 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:48.801 00:57:23 -- common/autotest_common.sh@899 -- # local i 00:14:48.801 00:57:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:48.801 00:57:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:48.801 00:57:23 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:48.801 00:57:23 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:49.060 [ 00:14:49.060 { 00:14:49.060 "name": "BaseBdev2", 00:14:49.060 "aliases": [ 00:14:49.060 "0676ec8b-13c9-4ff8-8d7b-44dbd6261760" 00:14:49.060 ], 00:14:49.060 "product_name": "Malloc disk", 00:14:49.060 "block_size": 512, 00:14:49.060 "num_blocks": 65536, 00:14:49.060 "uuid": "0676ec8b-13c9-4ff8-8d7b-44dbd6261760", 00:14:49.060 "assigned_rate_limits": { 00:14:49.060 "rw_ios_per_sec": 0, 00:14:49.060 "rw_mbytes_per_sec": 0, 00:14:49.060 "r_mbytes_per_sec": 0, 00:14:49.060 "w_mbytes_per_sec": 0 00:14:49.060 }, 00:14:49.060 "claimed": true, 00:14:49.060 "claim_type": "exclusive_write", 00:14:49.060 "zoned": false, 00:14:49.060 "supported_io_types": { 00:14:49.060 "read": true, 00:14:49.060 "write": true, 00:14:49.060 "unmap": true, 00:14:49.060 "write_zeroes": true, 00:14:49.060 "flush": true, 00:14:49.060 "reset": true, 00:14:49.060 "compare": false, 00:14:49.060 "compare_and_write": false, 00:14:49.060 "abort": true, 00:14:49.060 "nvme_admin": false, 00:14:49.060 "nvme_io": false 00:14:49.060 }, 00:14:49.060 "memory_domains": [ 00:14:49.060 { 00:14:49.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.060 "dma_device_type": 2 00:14:49.060 } 00:14:49.060 ], 00:14:49.060 "driver_specific": {} 00:14:49.060 } 00:14:49.060 ] 00:14:49.060 00:57:23 -- common/autotest_common.sh@905 -- # return 0 00:14:49.060 00:57:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:49.060 00:57:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:49.060 00:57:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:49.060 00:57:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:49.060 00:57:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:49.060 00:57:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:49.060 00:57:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:49.060 00:57:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:49.060 00:57:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:49.060 00:57:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:49.060 00:57:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:49.060 00:57:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:49.060 00:57:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.060 00:57:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.330 00:57:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:49.331 "name": "Existed_Raid", 00:14:49.331 "uuid": "710cc2f2-a7ed-4ad3-b647-9905fc2faa5a", 00:14:49.331 "strip_size_kb": 64, 00:14:49.331 "state": 
"online", 00:14:49.331 "raid_level": "raid0", 00:14:49.331 "superblock": false, 00:14:49.331 "num_base_bdevs": 2, 00:14:49.331 "num_base_bdevs_discovered": 2, 00:14:49.331 "num_base_bdevs_operational": 2, 00:14:49.331 "base_bdevs_list": [ 00:14:49.331 { 00:14:49.331 "name": "BaseBdev1", 00:14:49.331 "uuid": "48ae5a42-b27f-4ccf-9f35-49c6729d6234", 00:14:49.331 "is_configured": true, 00:14:49.331 "data_offset": 0, 00:14:49.331 "data_size": 65536 00:14:49.331 }, 00:14:49.331 { 00:14:49.331 "name": "BaseBdev2", 00:14:49.331 "uuid": "0676ec8b-13c9-4ff8-8d7b-44dbd6261760", 00:14:49.331 "is_configured": true, 00:14:49.331 "data_offset": 0, 00:14:49.331 "data_size": 65536 00:14:49.331 } 00:14:49.331 ] 00:14:49.331 }' 00:14:49.331 00:57:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:49.331 00:57:23 -- common/autotest_common.sh@10 -- # set +x 00:14:49.915 00:57:24 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:50.173 [2024-11-18 00:57:24.368042] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.173 [2024-11-18 00:57:24.368085] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.173 [2024-11-18 00:57:24.368192] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.173 00:57:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.432 00:57:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:50.432 "name": "Existed_Raid", 00:14:50.432 "uuid": "710cc2f2-a7ed-4ad3-b647-9905fc2faa5a", 00:14:50.432 "strip_size_kb": 64, 00:14:50.432 "state": "offline", 00:14:50.432 "raid_level": "raid0", 00:14:50.432 "superblock": false, 00:14:50.432 "num_base_bdevs": 2, 00:14:50.432 "num_base_bdevs_discovered": 1, 00:14:50.432 "num_base_bdevs_operational": 1, 00:14:50.432 "base_bdevs_list": [ 00:14:50.432 { 00:14:50.432 "name": null, 00:14:50.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.432 "is_configured": false, 00:14:50.432 "data_offset": 0, 00:14:50.432 "data_size": 65536 00:14:50.432 }, 00:14:50.432 { 00:14:50.432 "name": "BaseBdev2", 00:14:50.432 "uuid": "0676ec8b-13c9-4ff8-8d7b-44dbd6261760", 00:14:50.432 
"is_configured": true, 00:14:50.432 "data_offset": 0, 00:14:50.432 "data_size": 65536 00:14:50.432 } 00:14:50.432 ] 00:14:50.432 }' 00:14:50.432 00:57:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:50.432 00:57:24 -- common/autotest_common.sh@10 -- # set +x 00:14:50.999 00:57:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:50.999 00:57:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:50.999 00:57:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:50.999 00:57:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.259 00:57:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:51.259 00:57:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.259 00:57:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:51.518 [2024-11-18 00:57:25.773022] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.518 [2024-11-18 00:57:25.773123] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:14:51.518 00:57:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:51.518 00:57:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:51.518 00:57:25 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.518 00:57:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:51.778 00:57:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:51.778 00:57:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:51.778 00:57:26 -- bdev/bdev_raid.sh@287 -- # killprocess 123055 00:14:51.778 00:57:26 -- common/autotest_common.sh@936 -- # '[' -z 123055 ']' 00:14:51.778 00:57:26 -- common/autotest_common.sh@940 -- # kill -0 123055 00:14:51.778 00:57:26 -- common/autotest_common.sh@941 -- # uname 00:14:51.778 00:57:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:51.778 00:57:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123055 00:14:51.778 killing process with pid 123055 00:14:51.778 00:57:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:51.778 00:57:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:51.778 00:57:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123055' 00:14:51.778 00:57:26 -- common/autotest_common.sh@955 -- # kill 123055 00:14:51.778 00:57:26 -- common/autotest_common.sh@960 -- # wait 123055 00:14:51.778 [2024-11-18 00:57:26.096446] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.778 [2024-11-18 00:57:26.096544] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:52.346 ************************************ 00:14:52.346 END TEST raid_state_function_test 00:14:52.346 ************************************ 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:52.346 00:14:52.346 real 0m8.923s 00:14:52.346 user 0m15.573s 00:14:52.346 sys 0m1.551s 00:14:52.346 00:57:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:52.346 00:57:26 -- common/autotest_common.sh@10 -- # set +x 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:52.346 00:57:26 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:52.346 00:57:26 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:14:52.346 00:57:26 -- common/autotest_common.sh@10 -- # set +x 00:14:52.346 ************************************ 00:14:52.346 START TEST raid_state_function_test_sb 00:14:52.346 ************************************ 00:14:52.346 00:57:26 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 true 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@226 -- # raid_pid=123357 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:52.346 Process raid pid: 123357 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123357' 00:14:52.346 00:57:26 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123357 /var/tmp/spdk-raid.sock 00:14:52.346 00:57:26 -- common/autotest_common.sh@829 -- # '[' -z 123357 ']' 00:14:52.346 00:57:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:52.346 00:57:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:52.346 00:57:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:52.346 00:57:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.346 00:57:26 -- common/autotest_common.sh@10 -- # set +x 00:14:52.346 [2024-11-18 00:57:26.627428] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
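With the plain raid_state_function_test run above finished, the same helper is re-run as raid_state_function_test_sb with superblocks enabled, i.e. the extra -s flag on bdev_raid_create. The bdev_svc app is being restarted on /var/tmp/spdk-raid.sock; once it is listening, the flow the test drives reduces roughly to the sketch below (names, sizes and the jq filter are copied from the trace; the RPC/SOCK shorthand and the omission of the verify loops are simplifications for illustration only):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock
    # Two 32 MiB malloc base bdevs with 512-byte blocks (65536 blocks each)
    $RPC -s $SOCK bdev_malloc_create 32 512 -b BaseBdev1
    $RPC -s $SOCK bdev_malloc_create 32 512 -b BaseBdev2
    # raid0, 64 KiB strip size, with an on-disk superblock (-s); without -s the base
    # bdevs report data_offset 0, with it the first 2048 blocks are reserved and
    # data_size drops to 63488
    $RPC -s $SOCK bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # verify_raid_bdev_state reads this back and compares state/level/strip size
    $RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
    # raid0 has no redundancy, so removing one base bdev drops the raid to offline
    $RPC -s $SOCK bdev_malloc_delete BaseBdev1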
00:14:52.346 [2024-11-18 00:57:26.627716] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.606 [2024-11-18 00:57:26.784532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.606 [2024-11-18 00:57:26.856363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.606 [2024-11-18 00:57:26.934022] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.173 00:57:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.173 00:57:27 -- common/autotest_common.sh@862 -- # return 0 00:14:53.173 00:57:27 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:53.432 [2024-11-18 00:57:27.725275] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:53.432 [2024-11-18 00:57:27.725373] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:53.432 [2024-11-18 00:57:27.725384] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:53.432 [2024-11-18 00:57:27.725404] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:53.432 00:57:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:53.432 00:57:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:53.432 00:57:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:53.432 00:57:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:53.432 00:57:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:53.432 00:57:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:53.432 00:57:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:53.432 00:57:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:53.432 00:57:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:53.432 00:57:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:53.432 00:57:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.432 00:57:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.692 00:57:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:53.692 "name": "Existed_Raid", 00:14:53.692 "uuid": "e4773e62-a26f-4063-b04a-1b8f92ff0889", 00:14:53.692 "strip_size_kb": 64, 00:14:53.692 "state": "configuring", 00:14:53.692 "raid_level": "raid0", 00:14:53.692 "superblock": true, 00:14:53.692 "num_base_bdevs": 2, 00:14:53.692 "num_base_bdevs_discovered": 0, 00:14:53.692 "num_base_bdevs_operational": 2, 00:14:53.692 "base_bdevs_list": [ 00:14:53.692 { 00:14:53.692 "name": "BaseBdev1", 00:14:53.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.692 "is_configured": false, 00:14:53.692 "data_offset": 0, 00:14:53.692 "data_size": 0 00:14:53.692 }, 00:14:53.692 { 00:14:53.692 "name": "BaseBdev2", 00:14:53.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.692 "is_configured": false, 00:14:53.692 "data_offset": 0, 00:14:53.692 "data_size": 0 00:14:53.692 } 00:14:53.692 ] 00:14:53.692 }' 00:14:53.692 00:57:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:53.692 00:57:27 -- 
common/autotest_common.sh@10 -- # set +x 00:14:54.259 00:57:28 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:54.518 [2024-11-18 00:57:28.677267] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:54.518 [2024-11-18 00:57:28.677308] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:54.518 00:57:28 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:54.777 [2024-11-18 00:57:28.921402] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.777 [2024-11-18 00:57:28.921501] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.777 [2024-11-18 00:57:28.921513] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.777 [2024-11-18 00:57:28.921539] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.777 00:57:28 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:55.035 [2024-11-18 00:57:29.193221] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.035 BaseBdev1 00:14:55.035 00:57:29 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:55.035 00:57:29 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:55.035 00:57:29 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:55.035 00:57:29 -- common/autotest_common.sh@899 -- # local i 00:14:55.035 00:57:29 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:55.035 00:57:29 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:55.035 00:57:29 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:55.035 00:57:29 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.293 [ 00:14:55.293 { 00:14:55.293 "name": "BaseBdev1", 00:14:55.293 "aliases": [ 00:14:55.293 "4610373d-bdb3-4932-a2f0-bd19fb99d074" 00:14:55.293 ], 00:14:55.293 "product_name": "Malloc disk", 00:14:55.293 "block_size": 512, 00:14:55.293 "num_blocks": 65536, 00:14:55.293 "uuid": "4610373d-bdb3-4932-a2f0-bd19fb99d074", 00:14:55.293 "assigned_rate_limits": { 00:14:55.293 "rw_ios_per_sec": 0, 00:14:55.293 "rw_mbytes_per_sec": 0, 00:14:55.293 "r_mbytes_per_sec": 0, 00:14:55.293 "w_mbytes_per_sec": 0 00:14:55.293 }, 00:14:55.293 "claimed": true, 00:14:55.293 "claim_type": "exclusive_write", 00:14:55.293 "zoned": false, 00:14:55.293 "supported_io_types": { 00:14:55.293 "read": true, 00:14:55.293 "write": true, 00:14:55.293 "unmap": true, 00:14:55.293 "write_zeroes": true, 00:14:55.293 "flush": true, 00:14:55.293 "reset": true, 00:14:55.293 "compare": false, 00:14:55.293 "compare_and_write": false, 00:14:55.293 "abort": true, 00:14:55.293 "nvme_admin": false, 00:14:55.293 "nvme_io": false 00:14:55.293 }, 00:14:55.293 "memory_domains": [ 00:14:55.293 { 00:14:55.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.293 "dma_device_type": 2 00:14:55.293 } 00:14:55.293 ], 00:14:55.293 "driver_specific": {} 00:14:55.293 } 00:14:55.293 ] 00:14:55.293 
00:57:29 -- common/autotest_common.sh@905 -- # return 0 00:14:55.293 00:57:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:55.293 00:57:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:55.293 00:57:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:55.293 00:57:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:55.293 00:57:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:55.293 00:57:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:55.293 00:57:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:55.293 00:57:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:55.293 00:57:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:55.293 00:57:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:55.293 00:57:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.293 00:57:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.553 00:57:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:55.553 "name": "Existed_Raid", 00:14:55.553 "uuid": "f0f93ba0-eb66-48ae-8b87-31b3e5fdc02a", 00:14:55.553 "strip_size_kb": 64, 00:14:55.553 "state": "configuring", 00:14:55.553 "raid_level": "raid0", 00:14:55.553 "superblock": true, 00:14:55.553 "num_base_bdevs": 2, 00:14:55.553 "num_base_bdevs_discovered": 1, 00:14:55.553 "num_base_bdevs_operational": 2, 00:14:55.553 "base_bdevs_list": [ 00:14:55.553 { 00:14:55.553 "name": "BaseBdev1", 00:14:55.553 "uuid": "4610373d-bdb3-4932-a2f0-bd19fb99d074", 00:14:55.553 "is_configured": true, 00:14:55.553 "data_offset": 2048, 00:14:55.553 "data_size": 63488 00:14:55.553 }, 00:14:55.553 { 00:14:55.553 "name": "BaseBdev2", 00:14:55.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.553 "is_configured": false, 00:14:55.553 "data_offset": 0, 00:14:55.553 "data_size": 0 00:14:55.553 } 00:14:55.553 ] 00:14:55.553 }' 00:14:55.553 00:57:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:55.553 00:57:29 -- common/autotest_common.sh@10 -- # set +x 00:14:56.121 00:57:30 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:56.380 [2024-11-18 00:57:30.585494] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.380 [2024-11-18 00:57:30.585573] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:56.380 00:57:30 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:56.380 00:57:30 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:56.639 00:57:30 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:56.898 BaseBdev1 00:14:56.898 00:57:31 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:56.898 00:57:31 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:56.898 00:57:31 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:56.898 00:57:31 -- common/autotest_common.sh@899 -- # local i 00:14:56.898 00:57:31 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:56.898 00:57:31 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:56.898 00:57:31 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:56.898 00:57:31 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:57.157 [ 00:14:57.157 { 00:14:57.157 "name": "BaseBdev1", 00:14:57.157 "aliases": [ 00:14:57.157 "4655bb5d-f609-4d6d-94a3-d4593a637d8c" 00:14:57.157 ], 00:14:57.157 "product_name": "Malloc disk", 00:14:57.157 "block_size": 512, 00:14:57.157 "num_blocks": 65536, 00:14:57.157 "uuid": "4655bb5d-f609-4d6d-94a3-d4593a637d8c", 00:14:57.157 "assigned_rate_limits": { 00:14:57.157 "rw_ios_per_sec": 0, 00:14:57.157 "rw_mbytes_per_sec": 0, 00:14:57.157 "r_mbytes_per_sec": 0, 00:14:57.157 "w_mbytes_per_sec": 0 00:14:57.157 }, 00:14:57.157 "claimed": false, 00:14:57.157 "zoned": false, 00:14:57.157 "supported_io_types": { 00:14:57.157 "read": true, 00:14:57.157 "write": true, 00:14:57.157 "unmap": true, 00:14:57.157 "write_zeroes": true, 00:14:57.157 "flush": true, 00:14:57.157 "reset": true, 00:14:57.157 "compare": false, 00:14:57.157 "compare_and_write": false, 00:14:57.157 "abort": true, 00:14:57.157 "nvme_admin": false, 00:14:57.157 "nvme_io": false 00:14:57.157 }, 00:14:57.157 "memory_domains": [ 00:14:57.157 { 00:14:57.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.157 "dma_device_type": 2 00:14:57.157 } 00:14:57.157 ], 00:14:57.157 "driver_specific": {} 00:14:57.157 } 00:14:57.157 ] 00:14:57.158 00:57:31 -- common/autotest_common.sh@905 -- # return 0 00:14:57.158 00:57:31 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:57.416 [2024-11-18 00:57:31.609802] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.416 [2024-11-18 00:57:31.612253] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:57.416 [2024-11-18 00:57:31.612322] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:57.416 00:57:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:57.416 00:57:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:57.416 00:57:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:57.416 00:57:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:57.416 00:57:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:57.416 00:57:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:57.416 00:57:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:57.416 00:57:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:57.416 00:57:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.417 00:57:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.417 00:57:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.417 00:57:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.417 00:57:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.417 00:57:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.675 00:57:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:57.675 "name": "Existed_Raid", 00:14:57.675 "uuid": "ae1f8bfc-d5e8-497a-9532-5f2d2ccb7ae1", 00:14:57.675 "strip_size_kb": 64, 00:14:57.675 "state": 
"configuring", 00:14:57.675 "raid_level": "raid0", 00:14:57.675 "superblock": true, 00:14:57.675 "num_base_bdevs": 2, 00:14:57.675 "num_base_bdevs_discovered": 1, 00:14:57.675 "num_base_bdevs_operational": 2, 00:14:57.675 "base_bdevs_list": [ 00:14:57.675 { 00:14:57.675 "name": "BaseBdev1", 00:14:57.675 "uuid": "4655bb5d-f609-4d6d-94a3-d4593a637d8c", 00:14:57.675 "is_configured": true, 00:14:57.675 "data_offset": 2048, 00:14:57.675 "data_size": 63488 00:14:57.675 }, 00:14:57.676 { 00:14:57.676 "name": "BaseBdev2", 00:14:57.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.676 "is_configured": false, 00:14:57.676 "data_offset": 0, 00:14:57.676 "data_size": 0 00:14:57.676 } 00:14:57.676 ] 00:14:57.676 }' 00:14:57.676 00:57:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:57.676 00:57:31 -- common/autotest_common.sh@10 -- # set +x 00:14:58.242 00:57:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:58.499 [2024-11-18 00:57:32.755528] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.499 [2024-11-18 00:57:32.755809] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:14:58.499 [2024-11-18 00:57:32.755827] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:58.499 [2024-11-18 00:57:32.755981] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:14:58.499 [2024-11-18 00:57:32.756504] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:14:58.499 [2024-11-18 00:57:32.756530] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:14:58.499 [2024-11-18 00:57:32.756728] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.499 BaseBdev2 00:14:58.499 00:57:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:58.499 00:57:32 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:58.499 00:57:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:58.499 00:57:32 -- common/autotest_common.sh@899 -- # local i 00:14:58.499 00:57:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:58.499 00:57:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:58.499 00:57:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:58.757 00:57:33 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:59.016 [ 00:14:59.016 { 00:14:59.016 "name": "BaseBdev2", 00:14:59.016 "aliases": [ 00:14:59.016 "6a7f7789-170b-46fc-a5e3-215d77b66819" 00:14:59.016 ], 00:14:59.016 "product_name": "Malloc disk", 00:14:59.016 "block_size": 512, 00:14:59.016 "num_blocks": 65536, 00:14:59.016 "uuid": "6a7f7789-170b-46fc-a5e3-215d77b66819", 00:14:59.016 "assigned_rate_limits": { 00:14:59.016 "rw_ios_per_sec": 0, 00:14:59.016 "rw_mbytes_per_sec": 0, 00:14:59.016 "r_mbytes_per_sec": 0, 00:14:59.016 "w_mbytes_per_sec": 0 00:14:59.016 }, 00:14:59.016 "claimed": true, 00:14:59.016 "claim_type": "exclusive_write", 00:14:59.016 "zoned": false, 00:14:59.016 "supported_io_types": { 00:14:59.016 "read": true, 00:14:59.016 "write": true, 00:14:59.016 "unmap": true, 00:14:59.016 "write_zeroes": true, 00:14:59.016 "flush": true, 00:14:59.016 
"reset": true, 00:14:59.016 "compare": false, 00:14:59.016 "compare_and_write": false, 00:14:59.016 "abort": true, 00:14:59.016 "nvme_admin": false, 00:14:59.016 "nvme_io": false 00:14:59.016 }, 00:14:59.016 "memory_domains": [ 00:14:59.016 { 00:14:59.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.016 "dma_device_type": 2 00:14:59.016 } 00:14:59.016 ], 00:14:59.016 "driver_specific": {} 00:14:59.016 } 00:14:59.016 ] 00:14:59.016 00:57:33 -- common/autotest_common.sh@905 -- # return 0 00:14:59.016 00:57:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:59.016 00:57:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:59.016 00:57:33 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:59.016 00:57:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:59.016 00:57:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:59.016 00:57:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:59.016 00:57:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:59.016 00:57:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:59.016 00:57:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:59.016 00:57:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:59.016 00:57:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:59.016 00:57:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:59.016 00:57:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.016 00:57:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.275 00:57:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:59.275 "name": "Existed_Raid", 00:14:59.275 "uuid": "ae1f8bfc-d5e8-497a-9532-5f2d2ccb7ae1", 00:14:59.275 "strip_size_kb": 64, 00:14:59.275 "state": "online", 00:14:59.275 "raid_level": "raid0", 00:14:59.275 "superblock": true, 00:14:59.275 "num_base_bdevs": 2, 00:14:59.275 "num_base_bdevs_discovered": 2, 00:14:59.275 "num_base_bdevs_operational": 2, 00:14:59.275 "base_bdevs_list": [ 00:14:59.275 { 00:14:59.275 "name": "BaseBdev1", 00:14:59.275 "uuid": "4655bb5d-f609-4d6d-94a3-d4593a637d8c", 00:14:59.275 "is_configured": true, 00:14:59.275 "data_offset": 2048, 00:14:59.275 "data_size": 63488 00:14:59.275 }, 00:14:59.275 { 00:14:59.275 "name": "BaseBdev2", 00:14:59.275 "uuid": "6a7f7789-170b-46fc-a5e3-215d77b66819", 00:14:59.275 "is_configured": true, 00:14:59.275 "data_offset": 2048, 00:14:59.275 "data_size": 63488 00:14:59.275 } 00:14:59.275 ] 00:14:59.275 }' 00:14:59.275 00:57:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:59.275 00:57:33 -- common/autotest_common.sh@10 -- # set +x 00:14:59.843 00:57:34 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:00.102 [2024-11-18 00:57:34.387974] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:00.102 [2024-11-18 00:57:34.388016] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.102 [2024-11-18 00:57:34.388114] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:00.102 
00:57:34 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.102 00:57:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.361 00:57:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:00.361 "name": "Existed_Raid", 00:15:00.361 "uuid": "ae1f8bfc-d5e8-497a-9532-5f2d2ccb7ae1", 00:15:00.361 "strip_size_kb": 64, 00:15:00.361 "state": "offline", 00:15:00.361 "raid_level": "raid0", 00:15:00.361 "superblock": true, 00:15:00.361 "num_base_bdevs": 2, 00:15:00.361 "num_base_bdevs_discovered": 1, 00:15:00.361 "num_base_bdevs_operational": 1, 00:15:00.361 "base_bdevs_list": [ 00:15:00.361 { 00:15:00.361 "name": null, 00:15:00.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.361 "is_configured": false, 00:15:00.361 "data_offset": 2048, 00:15:00.361 "data_size": 63488 00:15:00.361 }, 00:15:00.361 { 00:15:00.361 "name": "BaseBdev2", 00:15:00.361 "uuid": "6a7f7789-170b-46fc-a5e3-215d77b66819", 00:15:00.361 "is_configured": true, 00:15:00.361 "data_offset": 2048, 00:15:00.361 "data_size": 63488 00:15:00.361 } 00:15:00.361 ] 00:15:00.361 }' 00:15:00.361 00:57:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:00.361 00:57:34 -- common/autotest_common.sh@10 -- # set +x 00:15:01.296 00:57:35 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:01.296 00:57:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:01.296 00:57:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.296 00:57:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:01.296 00:57:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:01.296 00:57:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:01.296 00:57:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:01.555 [2024-11-18 00:57:35.827041] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:01.555 [2024-11-18 00:57:35.827137] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:15:01.555 00:57:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:01.555 00:57:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:01.555 00:57:35 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:01.555 00:57:35 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.814 00:57:36 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:01.814 00:57:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:01.814 00:57:36 -- bdev/bdev_raid.sh@287 -- # killprocess 123357 00:15:01.814 00:57:36 -- common/autotest_common.sh@936 -- # '[' -z 123357 ']' 00:15:01.814 00:57:36 -- common/autotest_common.sh@940 -- # kill -0 123357 00:15:01.814 00:57:36 -- common/autotest_common.sh@941 -- # uname 00:15:01.814 00:57:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:01.814 00:57:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123357 00:15:01.814 00:57:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:01.814 killing process with pid 123357 00:15:01.814 00:57:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:01.814 00:57:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123357' 00:15:01.814 00:57:36 -- common/autotest_common.sh@955 -- # kill 123357 00:15:01.814 [2024-11-18 00:57:36.140820] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:01.814 [2024-11-18 00:57:36.140908] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.814 00:57:36 -- common/autotest_common.sh@960 -- # wait 123357 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:02.382 ************************************ 00:15:02.382 END TEST raid_state_function_test_sb 00:15:02.382 ************************************ 00:15:02.382 00:15:02.382 real 0m9.980s 00:15:02.382 user 0m17.463s 00:15:02.382 sys 0m1.741s 00:15:02.382 00:57:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:02.382 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:15:02.382 00:57:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:02.382 00:57:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:02.382 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:15:02.382 ************************************ 00:15:02.382 START TEST raid_superblock_test 00:15:02.382 ************************************ 00:15:02.382 00:57:36 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 2 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@357 -- # raid_pid=123681 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@358 -- # waitforlisten 123681 
/var/tmp/spdk-raid.sock 00:15:02.382 00:57:36 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:02.382 00:57:36 -- common/autotest_common.sh@829 -- # '[' -z 123681 ']' 00:15:02.382 00:57:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:02.382 00:57:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.383 00:57:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:02.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:02.383 00:57:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.383 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:15:02.383 [2024-11-18 00:57:36.681524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:02.383 [2024-11-18 00:57:36.681819] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123681 ] 00:15:02.642 [2024-11-18 00:57:36.836521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.642 [2024-11-18 00:57:36.906611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.642 [2024-11-18 00:57:36.984322] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.209 00:57:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.209 00:57:37 -- common/autotest_common.sh@862 -- # return 0 00:15:03.209 00:57:37 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:03.209 00:57:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:03.209 00:57:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:03.209 00:57:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:03.209 00:57:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:03.209 00:57:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:03.209 00:57:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:03.209 00:57:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:03.209 00:57:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:03.468 malloc1 00:15:03.468 00:57:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:03.727 [2024-11-18 00:57:37.975572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:03.727 [2024-11-18 00:57:37.975691] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.727 [2024-11-18 00:57:37.975743] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:15:03.727 [2024-11-18 00:57:37.975790] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.727 [2024-11-18 00:57:37.978753] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.727 [2024-11-18 00:57:37.978813] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:03.727 pt1 00:15:03.727 00:57:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
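Each base device in raid_superblock_test is a passthru bdev stacked on a malloc bdev and pinned to a fixed UUID via -u; pt1 has just been set up above, pt2 follows. Per base device that amounts to the two RPCs below (copied from the trace, shown here outside the test's loop):

    # 32 MiB malloc backing store, 512-byte blocks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b malloc1
    # passthru bdev pt1 layered on malloc1, given a well-known UUID
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001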
00:15:03.727 00:57:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:03.727 00:57:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:03.727 00:57:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:03.727 00:57:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:03.727 00:57:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:03.727 00:57:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:03.727 00:57:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:03.727 00:57:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:03.986 malloc2 00:15:03.986 00:57:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:04.245 [2024-11-18 00:57:38.391091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:04.245 [2024-11-18 00:57:38.391183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.245 [2024-11-18 00:57:38.391223] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:04.245 [2024-11-18 00:57:38.391270] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.245 [2024-11-18 00:57:38.393952] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.245 [2024-11-18 00:57:38.394004] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:04.245 pt2 00:15:04.245 00:57:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:04.245 00:57:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:04.245 00:57:38 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:04.246 [2024-11-18 00:57:38.571218] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:04.246 [2024-11-18 00:57:38.573690] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:04.246 [2024-11-18 00:57:38.573906] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:15:04.246 [2024-11-18 00:57:38.573918] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:04.246 [2024-11-18 00:57:38.574089] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:04.246 [2024-11-18 00:57:38.574516] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:15:04.246 [2024-11-18 00:57:38.574536] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:15:04.246 [2024-11-18 00:57:38.574701] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.246 00:57:38 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:04.246 00:57:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:04.246 00:57:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:04.246 00:57:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:04.246 00:57:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:04.246 00:57:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:15:04.246 00:57:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:04.246 00:57:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:04.246 00:57:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:04.246 00:57:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:04.246 00:57:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.246 00:57:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.505 00:57:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:04.505 "name": "raid_bdev1", 00:15:04.505 "uuid": "f881b710-88ad-4b3b-934e-6c5e8e2f4e1a", 00:15:04.505 "strip_size_kb": 64, 00:15:04.505 "state": "online", 00:15:04.505 "raid_level": "raid0", 00:15:04.505 "superblock": true, 00:15:04.505 "num_base_bdevs": 2, 00:15:04.505 "num_base_bdevs_discovered": 2, 00:15:04.505 "num_base_bdevs_operational": 2, 00:15:04.505 "base_bdevs_list": [ 00:15:04.505 { 00:15:04.505 "name": "pt1", 00:15:04.505 "uuid": "aeda8ec8-0a18-529c-936d-999b90a306c3", 00:15:04.505 "is_configured": true, 00:15:04.505 "data_offset": 2048, 00:15:04.505 "data_size": 63488 00:15:04.505 }, 00:15:04.505 { 00:15:04.505 "name": "pt2", 00:15:04.505 "uuid": "b721471e-e311-5e8d-a15c-3991b2a0dab2", 00:15:04.505 "is_configured": true, 00:15:04.505 "data_offset": 2048, 00:15:04.505 "data_size": 63488 00:15:04.505 } 00:15:04.505 ] 00:15:04.505 }' 00:15:04.505 00:57:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:04.505 00:57:38 -- common/autotest_common.sh@10 -- # set +x 00:15:05.073 00:57:39 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:05.073 00:57:39 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:05.335 [2024-11-18 00:57:39.615501] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.335 00:57:39 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f881b710-88ad-4b3b-934e-6c5e8e2f4e1a 00:15:05.335 00:57:39 -- bdev/bdev_raid.sh@380 -- # '[' -z f881b710-88ad-4b3b-934e-6c5e8e2f4e1a ']' 00:15:05.335 00:57:39 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:05.633 [2024-11-18 00:57:39.807397] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.633 [2024-11-18 00:57:39.807448] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.633 [2024-11-18 00:57:39.807568] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.633 [2024-11-18 00:57:39.807638] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.633 [2024-11-18 00:57:39.807649] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:15:05.633 00:57:39 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.633 00:57:39 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:05.908 00:57:40 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:05.908 00:57:40 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:05.908 00:57:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:05.908 00:57:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
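Once pt1 and pt2 exist, the test stripes them into a raid0 volume with an on-disk superblock, confirms its state through the get-bdevs RPC, and tears it down before the re-assembly checks that follow. Roughly, with the commands and jq filter taken from the trace above (the expected-state comparison itself is done by the verify_raid_bdev_state helper):

# create raid_bdev1: raid0, 64 KiB strip size, superblock enabled (-s)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s

# dump the raid bdev and confirm it came up "online" with both base bdevs discovered
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

# teardown of the raid bdev and its passthru base devices
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2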
00:15:05.908 00:57:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:05.908 00:57:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:06.184 00:57:40 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:06.184 00:57:40 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:06.444 00:57:40 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:06.444 00:57:40 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:06.444 00:57:40 -- common/autotest_common.sh@650 -- # local es=0 00:15:06.444 00:57:40 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:06.444 00:57:40 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.444 00:57:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.444 00:57:40 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.444 00:57:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.444 00:57:40 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.444 00:57:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.444 00:57:40 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.444 00:57:40 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:06.444 00:57:40 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:06.704 [2024-11-18 00:57:40.911574] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:06.704 [2024-11-18 00:57:40.914074] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:06.704 [2024-11-18 00:57:40.914169] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:06.704 [2024-11-18 00:57:40.914250] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:06.704 [2024-11-18 00:57:40.914288] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.704 [2024-11-18 00:57:40.914299] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:15:06.704 request: 00:15:06.704 { 00:15:06.704 "name": "raid_bdev1", 00:15:06.704 "raid_level": "raid0", 00:15:06.704 "base_bdevs": [ 00:15:06.704 "malloc1", 00:15:06.704 "malloc2" 00:15:06.704 ], 00:15:06.704 "superblock": false, 00:15:06.704 "strip_size_kb": 64, 00:15:06.704 "method": "bdev_raid_create", 00:15:06.704 "req_id": 1 00:15:06.704 } 00:15:06.704 Got JSON-RPC error response 00:15:06.704 response: 00:15:06.704 { 00:15:06.704 "code": -17, 00:15:06.704 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:06.704 } 00:15:06.704 00:57:40 -- common/autotest_common.sh@653 -- # es=1 00:15:06.704 00:57:40 -- common/autotest_common.sh@661 
-- # (( es > 128 )) 00:15:06.704 00:57:40 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:06.704 00:57:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:06.704 00:57:40 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:06.704 00:57:40 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.963 00:57:41 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:06.963 00:57:41 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:06.963 00:57:41 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:07.223 [2024-11-18 00:57:41.407576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:07.223 [2024-11-18 00:57:41.407692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.223 [2024-11-18 00:57:41.407751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:07.223 [2024-11-18 00:57:41.407780] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.223 [2024-11-18 00:57:41.410579] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.223 [2024-11-18 00:57:41.410635] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:07.223 [2024-11-18 00:57:41.410721] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:07.223 [2024-11-18 00:57:41.410789] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:07.223 pt1 00:15:07.223 00:57:41 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:15:07.223 00:57:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:07.223 00:57:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:07.223 00:57:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:07.223 00:57:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:07.223 00:57:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:07.223 00:57:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:07.223 00:57:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:07.223 00:57:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:07.223 00:57:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:07.223 00:57:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.223 00:57:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.223 00:57:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:07.223 "name": "raid_bdev1", 00:15:07.223 "uuid": "f881b710-88ad-4b3b-934e-6c5e8e2f4e1a", 00:15:07.223 "strip_size_kb": 64, 00:15:07.223 "state": "configuring", 00:15:07.223 "raid_level": "raid0", 00:15:07.223 "superblock": true, 00:15:07.223 "num_base_bdevs": 2, 00:15:07.223 "num_base_bdevs_discovered": 1, 00:15:07.223 "num_base_bdevs_operational": 2, 00:15:07.223 "base_bdevs_list": [ 00:15:07.223 { 00:15:07.223 "name": "pt1", 00:15:07.223 "uuid": "aeda8ec8-0a18-529c-936d-999b90a306c3", 00:15:07.223 "is_configured": true, 00:15:07.223 "data_offset": 2048, 00:15:07.223 "data_size": 63488 00:15:07.223 }, 00:15:07.223 { 00:15:07.223 "name": null, 00:15:07.223 "uuid": "b721471e-e311-5e8d-a15c-3991b2a0dab2", 00:15:07.223 
"is_configured": false, 00:15:07.223 "data_offset": 2048, 00:15:07.223 "data_size": 63488 00:15:07.223 } 00:15:07.223 ] 00:15:07.223 }' 00:15:07.223 00:57:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:07.223 00:57:41 -- common/autotest_common.sh@10 -- # set +x 00:15:07.862 00:57:42 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:07.862 00:57:42 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:07.862 00:57:42 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:07.862 00:57:42 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:08.122 [2024-11-18 00:57:42.303795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:08.122 [2024-11-18 00:57:42.303934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.122 [2024-11-18 00:57:42.303973] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:15:08.122 [2024-11-18 00:57:42.304001] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.122 [2024-11-18 00:57:42.304498] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.122 [2024-11-18 00:57:42.304540] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:08.122 [2024-11-18 00:57:42.304630] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:08.122 [2024-11-18 00:57:42.304658] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:08.122 [2024-11-18 00:57:42.304763] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:08.122 [2024-11-18 00:57:42.304771] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:08.122 [2024-11-18 00:57:42.304848] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:08.122 [2024-11-18 00:57:42.305167] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:08.122 [2024-11-18 00:57:42.305184] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:08.122 [2024-11-18 00:57:42.305285] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.122 pt2 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.122 00:57:42 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:08.122 "name": "raid_bdev1", 00:15:08.122 "uuid": "f881b710-88ad-4b3b-934e-6c5e8e2f4e1a", 00:15:08.122 "strip_size_kb": 64, 00:15:08.122 "state": "online", 00:15:08.122 "raid_level": "raid0", 00:15:08.122 "superblock": true, 00:15:08.122 "num_base_bdevs": 2, 00:15:08.122 "num_base_bdevs_discovered": 2, 00:15:08.122 "num_base_bdevs_operational": 2, 00:15:08.122 "base_bdevs_list": [ 00:15:08.122 { 00:15:08.122 "name": "pt1", 00:15:08.122 "uuid": "aeda8ec8-0a18-529c-936d-999b90a306c3", 00:15:08.122 "is_configured": true, 00:15:08.122 "data_offset": 2048, 00:15:08.122 "data_size": 63488 00:15:08.122 }, 00:15:08.122 { 00:15:08.122 "name": "pt2", 00:15:08.122 "uuid": "b721471e-e311-5e8d-a15c-3991b2a0dab2", 00:15:08.122 "is_configured": true, 00:15:08.122 "data_offset": 2048, 00:15:08.122 "data_size": 63488 00:15:08.122 } 00:15:08.122 ] 00:15:08.122 }' 00:15:08.122 00:57:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.122 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.060 00:57:43 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:09.060 00:57:43 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:09.060 [2024-11-18 00:57:43.384171] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.060 00:57:43 -- bdev/bdev_raid.sh@430 -- # '[' f881b710-88ad-4b3b-934e-6c5e8e2f4e1a '!=' f881b710-88ad-4b3b-934e-6c5e8e2f4e1a ']' 00:15:09.060 00:57:43 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:09.060 00:57:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:09.060 00:57:43 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:09.060 00:57:43 -- bdev/bdev_raid.sh@511 -- # killprocess 123681 00:15:09.060 00:57:43 -- common/autotest_common.sh@936 -- # '[' -z 123681 ']' 00:15:09.060 00:57:43 -- common/autotest_common.sh@940 -- # kill -0 123681 00:15:09.060 00:57:43 -- common/autotest_common.sh@941 -- # uname 00:15:09.060 00:57:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:09.060 00:57:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123681 00:15:09.060 killing process with pid 123681 00:15:09.060 00:57:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:09.060 00:57:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:09.060 00:57:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123681' 00:15:09.060 00:57:43 -- common/autotest_common.sh@955 -- # kill 123681 00:15:09.060 [2024-11-18 00:57:43.436459] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.060 00:57:43 -- common/autotest_common.sh@960 -- # wait 123681 00:15:09.060 [2024-11-18 00:57:43.436536] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.060 [2024-11-18 00:57:43.436589] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.060 [2024-11-18 00:57:43.436597] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:09.319 [2024-11-18 00:57:43.476486] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.579 ************************************ 00:15:09.579 END TEST raid_superblock_test 00:15:09.579 ************************************ 00:15:09.579 00:57:43 -- 
bdev/bdev_raid.sh@513 -- # return 0 00:15:09.579 00:15:09.579 real 0m7.259s 00:15:09.579 user 0m12.378s 00:15:09.579 sys 0m1.394s 00:15:09.579 00:57:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:09.579 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:09.579 00:57:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:09.579 00:57:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:09.579 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.579 ************************************ 00:15:09.579 START TEST raid_state_function_test 00:15:09.579 ************************************ 00:15:09.579 00:57:43 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 false 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=123914 00:15:09.579 Process raid pid: 123914 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123914' 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123914 /var/tmp/spdk-raid.sock 00:15:09.579 00:57:43 -- common/autotest_common.sh@829 -- # '[' -z 123914 ']' 00:15:09.579 00:57:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:09.579 00:57:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.579 00:57:43 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:09.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
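raid_state_function_test drives a fresh bdev_svc instance over the same socket, so the test begins by launching the app and blocking until the RPC endpoint answers. A minimal sketch of that startup pattern, assuming the paths and socket shown in the trace (the retry loop stands in for the waitforlisten helper from autotest_common.sh rather than reproducing its exact implementation):

# launch the minimal bdev app for this test (pid 123914 in the run above)
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!

# poll until the UNIX domain socket accepts RPCs (max_retries=100 in the helper)
for ((i = 0; i < 100; i++)); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done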
00:15:09.579 00:57:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:09.579 00:57:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.579 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.839 [2024-11-18 00:57:44.004313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:09.839 [2024-11-18 00:57:44.004514] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.839 [2024-11-18 00:57:44.149928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.839 [2024-11-18 00:57:44.221005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.098 [2024-11-18 00:57:44.298167] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.667 00:57:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.667 00:57:44 -- common/autotest_common.sh@862 -- # return 0 00:15:10.667 00:57:44 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:10.926 [2024-11-18 00:57:45.093642] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.926 [2024-11-18 00:57:45.093749] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.926 [2024-11-18 00:57:45.093761] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.926 [2024-11-18 00:57:45.093781] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.926 00:57:45 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:10.926 00:57:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:10.926 00:57:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:10.926 00:57:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:10.926 00:57:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:10.926 00:57:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:10.926 00:57:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:10.926 00:57:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:10.926 00:57:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:10.926 00:57:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:10.926 00:57:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.926 00:57:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.185 00:57:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:11.185 "name": "Existed_Raid", 00:15:11.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.185 "strip_size_kb": 64, 00:15:11.185 "state": "configuring", 00:15:11.185 "raid_level": "concat", 00:15:11.185 "superblock": false, 00:15:11.185 "num_base_bdevs": 2, 00:15:11.185 "num_base_bdevs_discovered": 0, 00:15:11.185 "num_base_bdevs_operational": 2, 00:15:11.185 "base_bdevs_list": [ 00:15:11.185 { 00:15:11.185 "name": "BaseBdev1", 00:15:11.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.185 "is_configured": false, 
00:15:11.185 "data_offset": 0, 00:15:11.185 "data_size": 0 00:15:11.185 }, 00:15:11.185 { 00:15:11.185 "name": "BaseBdev2", 00:15:11.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.185 "is_configured": false, 00:15:11.185 "data_offset": 0, 00:15:11.185 "data_size": 0 00:15:11.185 } 00:15:11.185 ] 00:15:11.185 }' 00:15:11.185 00:57:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:11.185 00:57:45 -- common/autotest_common.sh@10 -- # set +x 00:15:11.753 00:57:45 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:12.012 [2024-11-18 00:57:46.173674] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.012 [2024-11-18 00:57:46.173719] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:12.012 00:57:46 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:12.012 [2024-11-18 00:57:46.377752] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.012 [2024-11-18 00:57:46.377833] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.012 [2024-11-18 00:57:46.377842] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.012 [2024-11-18 00:57:46.377883] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.012 00:57:46 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:12.272 [2024-11-18 00:57:46.641372] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.272 BaseBdev1 00:15:12.272 00:57:46 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:12.272 00:57:46 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:12.272 00:57:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.272 00:57:46 -- common/autotest_common.sh@899 -- # local i 00:15:12.272 00:57:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.272 00:57:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.272 00:57:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:12.531 00:57:46 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:12.790 [ 00:15:12.790 { 00:15:12.790 "name": "BaseBdev1", 00:15:12.790 "aliases": [ 00:15:12.790 "e1a9757d-7cc2-43fd-8a19-32a0728a5c93" 00:15:12.790 ], 00:15:12.790 "product_name": "Malloc disk", 00:15:12.790 "block_size": 512, 00:15:12.790 "num_blocks": 65536, 00:15:12.790 "uuid": "e1a9757d-7cc2-43fd-8a19-32a0728a5c93", 00:15:12.790 "assigned_rate_limits": { 00:15:12.790 "rw_ios_per_sec": 0, 00:15:12.790 "rw_mbytes_per_sec": 0, 00:15:12.790 "r_mbytes_per_sec": 0, 00:15:12.790 "w_mbytes_per_sec": 0 00:15:12.790 }, 00:15:12.790 "claimed": true, 00:15:12.790 "claim_type": "exclusive_write", 00:15:12.790 "zoned": false, 00:15:12.790 "supported_io_types": { 00:15:12.790 "read": true, 00:15:12.790 "write": true, 00:15:12.790 "unmap": true, 00:15:12.790 "write_zeroes": true, 00:15:12.790 "flush": true, 00:15:12.790 "reset": true, 00:15:12.790 
"compare": false, 00:15:12.790 "compare_and_write": false, 00:15:12.790 "abort": true, 00:15:12.790 "nvme_admin": false, 00:15:12.790 "nvme_io": false 00:15:12.790 }, 00:15:12.790 "memory_domains": [ 00:15:12.790 { 00:15:12.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.790 "dma_device_type": 2 00:15:12.790 } 00:15:12.790 ], 00:15:12.790 "driver_specific": {} 00:15:12.790 } 00:15:12.790 ] 00:15:12.790 00:57:46 -- common/autotest_common.sh@905 -- # return 0 00:15:12.790 00:57:46 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:12.790 00:57:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:12.790 00:57:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:12.790 00:57:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:12.790 00:57:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:12.790 00:57:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:12.790 00:57:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:12.790 00:57:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:12.790 00:57:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:12.790 00:57:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:12.790 00:57:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.790 00:57:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.050 00:57:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.050 "name": "Existed_Raid", 00:15:13.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.050 "strip_size_kb": 64, 00:15:13.050 "state": "configuring", 00:15:13.050 "raid_level": "concat", 00:15:13.050 "superblock": false, 00:15:13.050 "num_base_bdevs": 2, 00:15:13.050 "num_base_bdevs_discovered": 1, 00:15:13.050 "num_base_bdevs_operational": 2, 00:15:13.050 "base_bdevs_list": [ 00:15:13.050 { 00:15:13.050 "name": "BaseBdev1", 00:15:13.050 "uuid": "e1a9757d-7cc2-43fd-8a19-32a0728a5c93", 00:15:13.050 "is_configured": true, 00:15:13.050 "data_offset": 0, 00:15:13.050 "data_size": 65536 00:15:13.050 }, 00:15:13.050 { 00:15:13.050 "name": "BaseBdev2", 00:15:13.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.050 "is_configured": false, 00:15:13.050 "data_offset": 0, 00:15:13.050 "data_size": 0 00:15:13.050 } 00:15:13.050 ] 00:15:13.050 }' 00:15:13.050 00:57:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.050 00:57:47 -- common/autotest_common.sh@10 -- # set +x 00:15:13.617 00:57:47 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:13.617 [2024-11-18 00:57:47.897617] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:13.617 [2024-11-18 00:57:47.897704] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:13.617 00:57:47 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:13.617 00:57:47 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:13.876 [2024-11-18 00:57:48.069755] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.876 [2024-11-18 00:57:48.072218] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:15:13.876 [2024-11-18 00:57:48.072286] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.876 00:57:48 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:13.876 00:57:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:13.876 00:57:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:13.876 00:57:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:13.876 00:57:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:13.876 00:57:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:13.876 00:57:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:13.876 00:57:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:13.876 00:57:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:13.876 00:57:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:13.876 00:57:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:13.876 00:57:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:13.876 00:57:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.876 00:57:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.135 00:57:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.135 "name": "Existed_Raid", 00:15:14.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.135 "strip_size_kb": 64, 00:15:14.135 "state": "configuring", 00:15:14.135 "raid_level": "concat", 00:15:14.135 "superblock": false, 00:15:14.135 "num_base_bdevs": 2, 00:15:14.135 "num_base_bdevs_discovered": 1, 00:15:14.135 "num_base_bdevs_operational": 2, 00:15:14.135 "base_bdevs_list": [ 00:15:14.135 { 00:15:14.135 "name": "BaseBdev1", 00:15:14.135 "uuid": "e1a9757d-7cc2-43fd-8a19-32a0728a5c93", 00:15:14.135 "is_configured": true, 00:15:14.135 "data_offset": 0, 00:15:14.135 "data_size": 65536 00:15:14.135 }, 00:15:14.135 { 00:15:14.135 "name": "BaseBdev2", 00:15:14.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.135 "is_configured": false, 00:15:14.135 "data_offset": 0, 00:15:14.135 "data_size": 0 00:15:14.135 } 00:15:14.135 ] 00:15:14.135 }' 00:15:14.135 00:57:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.135 00:57:48 -- common/autotest_common.sh@10 -- # set +x 00:15:14.704 00:57:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:14.964 [2024-11-18 00:57:49.135401] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.964 [2024-11-18 00:57:49.135479] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:15:14.964 [2024-11-18 00:57:49.135492] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:14.964 [2024-11-18 00:57:49.135675] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:15:14.964 [2024-11-18 00:57:49.136217] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:15:14.964 [2024-11-18 00:57:49.136243] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:15:14.964 [2024-11-18 00:57:49.136553] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.964 BaseBdev2 00:15:14.964 00:57:49 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:15:14.964 00:57:49 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:14.964 00:57:49 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:14.964 00:57:49 -- common/autotest_common.sh@899 -- # local i 00:15:14.964 00:57:49 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:14.964 00:57:49 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:14.964 00:57:49 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:14.964 00:57:49 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:15.223 [ 00:15:15.223 { 00:15:15.223 "name": "BaseBdev2", 00:15:15.223 "aliases": [ 00:15:15.223 "b89786dc-c179-4abe-b852-e562c020d4a0" 00:15:15.223 ], 00:15:15.223 "product_name": "Malloc disk", 00:15:15.223 "block_size": 512, 00:15:15.223 "num_blocks": 65536, 00:15:15.223 "uuid": "b89786dc-c179-4abe-b852-e562c020d4a0", 00:15:15.223 "assigned_rate_limits": { 00:15:15.223 "rw_ios_per_sec": 0, 00:15:15.223 "rw_mbytes_per_sec": 0, 00:15:15.223 "r_mbytes_per_sec": 0, 00:15:15.223 "w_mbytes_per_sec": 0 00:15:15.223 }, 00:15:15.223 "claimed": true, 00:15:15.223 "claim_type": "exclusive_write", 00:15:15.223 "zoned": false, 00:15:15.223 "supported_io_types": { 00:15:15.223 "read": true, 00:15:15.223 "write": true, 00:15:15.223 "unmap": true, 00:15:15.223 "write_zeroes": true, 00:15:15.223 "flush": true, 00:15:15.223 "reset": true, 00:15:15.223 "compare": false, 00:15:15.223 "compare_and_write": false, 00:15:15.223 "abort": true, 00:15:15.223 "nvme_admin": false, 00:15:15.223 "nvme_io": false 00:15:15.223 }, 00:15:15.223 "memory_domains": [ 00:15:15.223 { 00:15:15.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.223 "dma_device_type": 2 00:15:15.223 } 00:15:15.223 ], 00:15:15.223 "driver_specific": {} 00:15:15.223 } 00:15:15.223 ] 00:15:15.223 00:57:49 -- common/autotest_common.sh@905 -- # return 0 00:15:15.223 00:57:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:15.223 00:57:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:15.223 00:57:49 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:15.223 00:57:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:15.223 00:57:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:15.223 00:57:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:15.223 00:57:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:15.223 00:57:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:15.223 00:57:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:15.223 00:57:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:15.223 00:57:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:15.223 00:57:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:15.223 00:57:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.223 00:57:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.482 00:57:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:15.482 "name": "Existed_Raid", 00:15:15.482 "uuid": "b898be73-b587-4e75-b8e2-afc421bb9e8c", 00:15:15.482 "strip_size_kb": 64, 00:15:15.482 "state": "online", 00:15:15.482 "raid_level": "concat", 00:15:15.482 "superblock": false, 
00:15:15.482 "num_base_bdevs": 2, 00:15:15.482 "num_base_bdevs_discovered": 2, 00:15:15.482 "num_base_bdevs_operational": 2, 00:15:15.482 "base_bdevs_list": [ 00:15:15.482 { 00:15:15.482 "name": "BaseBdev1", 00:15:15.482 "uuid": "e1a9757d-7cc2-43fd-8a19-32a0728a5c93", 00:15:15.482 "is_configured": true, 00:15:15.482 "data_offset": 0, 00:15:15.482 "data_size": 65536 00:15:15.482 }, 00:15:15.482 { 00:15:15.482 "name": "BaseBdev2", 00:15:15.482 "uuid": "b89786dc-c179-4abe-b852-e562c020d4a0", 00:15:15.482 "is_configured": true, 00:15:15.482 "data_offset": 0, 00:15:15.482 "data_size": 65536 00:15:15.482 } 00:15:15.482 ] 00:15:15.482 }' 00:15:15.482 00:57:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:15.482 00:57:49 -- common/autotest_common.sh@10 -- # set +x 00:15:16.051 00:57:50 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:16.311 [2024-11-18 00:57:50.543889] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:16.311 [2024-11-18 00:57:50.544152] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.311 [2024-11-18 00:57:50.544386] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.311 00:57:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.570 00:57:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:16.570 "name": "Existed_Raid", 00:15:16.570 "uuid": "b898be73-b587-4e75-b8e2-afc421bb9e8c", 00:15:16.570 "strip_size_kb": 64, 00:15:16.570 "state": "offline", 00:15:16.570 "raid_level": "concat", 00:15:16.570 "superblock": false, 00:15:16.570 "num_base_bdevs": 2, 00:15:16.570 "num_base_bdevs_discovered": 1, 00:15:16.570 "num_base_bdevs_operational": 1, 00:15:16.570 "base_bdevs_list": [ 00:15:16.570 { 00:15:16.570 "name": null, 00:15:16.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.570 "is_configured": false, 00:15:16.570 "data_offset": 0, 00:15:16.570 "data_size": 65536 00:15:16.570 }, 00:15:16.570 { 00:15:16.570 "name": "BaseBdev2", 00:15:16.570 "uuid": "b89786dc-c179-4abe-b852-e562c020d4a0", 00:15:16.570 "is_configured": true, 00:15:16.570 "data_offset": 0, 00:15:16.570 
"data_size": 65536 00:15:16.570 } 00:15:16.570 ] 00:15:16.570 }' 00:15:16.570 00:57:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:16.570 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:15:17.138 00:57:51 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:17.138 00:57:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:17.138 00:57:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.138 00:57:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:17.398 00:57:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:17.398 00:57:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:17.398 00:57:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:17.658 [2024-11-18 00:57:52.005248] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:17.658 [2024-11-18 00:57:52.005517] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:15:17.658 00:57:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:17.658 00:57:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:17.658 00:57:52 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.658 00:57:52 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:17.917 00:57:52 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:17.917 00:57:52 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:17.917 00:57:52 -- bdev/bdev_raid.sh@287 -- # killprocess 123914 00:15:17.917 00:57:52 -- common/autotest_common.sh@936 -- # '[' -z 123914 ']' 00:15:17.917 00:57:52 -- common/autotest_common.sh@940 -- # kill -0 123914 00:15:17.917 00:57:52 -- common/autotest_common.sh@941 -- # uname 00:15:17.917 00:57:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:17.917 00:57:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123914 00:15:18.176 killing process with pid 123914 00:15:18.176 00:57:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:18.176 00:57:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:18.176 00:57:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123914' 00:15:18.176 00:57:52 -- common/autotest_common.sh@955 -- # kill 123914 00:15:18.176 00:57:52 -- common/autotest_common.sh@960 -- # wait 123914 00:15:18.176 [2024-11-18 00:57:52.325399] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:18.176 [2024-11-18 00:57:52.325489] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.436 ************************************ 00:15:18.436 END TEST raid_state_function_test 00:15:18.436 ************************************ 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:18.436 00:15:18.436 real 0m8.774s 00:15:18.436 user 0m15.245s 00:15:18.436 sys 0m1.586s 00:15:18.436 00:57:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:18.436 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:18.436 00:57:52 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:18.436 00:57:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:18.436 00:57:52 -- common/autotest_common.sh@10 -- # 
set +x 00:15:18.436 ************************************ 00:15:18.436 START TEST raid_state_function_test_sb 00:15:18.436 ************************************ 00:15:18.436 00:57:52 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 true 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@226 -- # raid_pid=124223 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124223' 00:15:18.436 Process raid pid: 124223 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:18.436 00:57:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124223 /var/tmp/spdk-raid.sock 00:15:18.436 00:57:52 -- common/autotest_common.sh@829 -- # '[' -z 124223 ']' 00:15:18.436 00:57:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:18.436 00:57:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.436 00:57:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:18.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:18.436 00:57:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.436 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.695 [2024-11-18 00:57:52.867729] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
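raid_state_function_test_sb repeats the same state-machine checks with superblock=true, which only changes the create call: superblock_create_arg becomes -s, so Existed_Raid is created with an on-disk superblock and its base bdevs report data_offset 2048 in the dumps below. The corresponding call, as it appears further down in the trace:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid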
00:15:18.695 [2024-11-18 00:57:52.868257] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.695 [2024-11-18 00:57:53.025306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.954 [2024-11-18 00:57:53.102333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.954 [2024-11-18 00:57:53.179473] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.523 00:57:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.523 00:57:53 -- common/autotest_common.sh@862 -- # return 0 00:15:19.523 00:57:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:19.782 [2024-11-18 00:57:53.959203] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.782 [2024-11-18 00:57:53.959513] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.782 [2024-11-18 00:57:53.959601] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.782 [2024-11-18 00:57:53.959657] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.782 00:57:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:19.782 00:57:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:19.782 00:57:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:19.782 00:57:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:19.782 00:57:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:19.782 00:57:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:19.782 00:57:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:19.782 00:57:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:19.782 00:57:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:19.782 00:57:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:19.782 00:57:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.782 00:57:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.041 00:57:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.041 "name": "Existed_Raid", 00:15:20.042 "uuid": "03619015-f662-4fd1-afe6-c30d4eded86a", 00:15:20.042 "strip_size_kb": 64, 00:15:20.042 "state": "configuring", 00:15:20.042 "raid_level": "concat", 00:15:20.042 "superblock": true, 00:15:20.042 "num_base_bdevs": 2, 00:15:20.042 "num_base_bdevs_discovered": 0, 00:15:20.042 "num_base_bdevs_operational": 2, 00:15:20.042 "base_bdevs_list": [ 00:15:20.042 { 00:15:20.042 "name": "BaseBdev1", 00:15:20.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.042 "is_configured": false, 00:15:20.042 "data_offset": 0, 00:15:20.042 "data_size": 0 00:15:20.042 }, 00:15:20.042 { 00:15:20.042 "name": "BaseBdev2", 00:15:20.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.042 "is_configured": false, 00:15:20.042 "data_offset": 0, 00:15:20.042 "data_size": 0 00:15:20.042 } 00:15:20.042 ] 00:15:20.042 }' 00:15:20.042 00:57:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.042 00:57:54 -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.610 00:57:54 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:20.610 [2024-11-18 00:57:54.979254] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:20.610 [2024-11-18 00:57:54.979515] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:20.610 00:57:54 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:20.869 [2024-11-18 00:57:55.147361] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:20.869 [2024-11-18 00:57:55.147631] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:20.869 [2024-11-18 00:57:55.147740] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:20.869 [2024-11-18 00:57:55.147797] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:20.869 00:57:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:21.129 [2024-11-18 00:57:55.339201] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.129 BaseBdev1 00:15:21.129 00:57:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:21.129 00:57:55 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:21.129 00:57:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:21.129 00:57:55 -- common/autotest_common.sh@899 -- # local i 00:15:21.129 00:57:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:21.129 00:57:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:21.129 00:57:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:21.388 00:57:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:21.388 [ 00:15:21.388 { 00:15:21.388 "name": "BaseBdev1", 00:15:21.388 "aliases": [ 00:15:21.388 "1f44da34-e677-4870-974f-cbe5f5bcb4e2" 00:15:21.388 ], 00:15:21.388 "product_name": "Malloc disk", 00:15:21.388 "block_size": 512, 00:15:21.388 "num_blocks": 65536, 00:15:21.388 "uuid": "1f44da34-e677-4870-974f-cbe5f5bcb4e2", 00:15:21.388 "assigned_rate_limits": { 00:15:21.388 "rw_ios_per_sec": 0, 00:15:21.388 "rw_mbytes_per_sec": 0, 00:15:21.388 "r_mbytes_per_sec": 0, 00:15:21.388 "w_mbytes_per_sec": 0 00:15:21.388 }, 00:15:21.388 "claimed": true, 00:15:21.388 "claim_type": "exclusive_write", 00:15:21.388 "zoned": false, 00:15:21.388 "supported_io_types": { 00:15:21.388 "read": true, 00:15:21.388 "write": true, 00:15:21.388 "unmap": true, 00:15:21.388 "write_zeroes": true, 00:15:21.388 "flush": true, 00:15:21.388 "reset": true, 00:15:21.388 "compare": false, 00:15:21.388 "compare_and_write": false, 00:15:21.388 "abort": true, 00:15:21.388 "nvme_admin": false, 00:15:21.388 "nvme_io": false 00:15:21.388 }, 00:15:21.388 "memory_domains": [ 00:15:21.388 { 00:15:21.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.388 "dma_device_type": 2 00:15:21.388 } 00:15:21.388 ], 00:15:21.388 "driver_specific": {} 00:15:21.388 } 00:15:21.388 ] 00:15:21.646 
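After BaseBdev1 is claimed, the test expects the array to sit in the "configuring" state with one of its two base bdevs discovered. The verify_raid_bdev_state helper checks this by re-reading the raid bdev JSON and comparing fields; a sketch of that check, with the jq filter taken from the trace and the expected values from the dump that follows:

raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

# fields asserted at this point in the test
jq -r '.state' <<< "$raid_bdev_info"                      # "configuring"
jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info"  # 1
jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info" # 2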
00:57:55 -- common/autotest_common.sh@905 -- # return 0 00:15:21.646 00:57:55 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:21.646 00:57:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:21.646 00:57:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:21.646 00:57:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:21.646 00:57:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:21.646 00:57:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:21.646 00:57:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.646 00:57:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.646 00:57:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.646 00:57:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.646 00:57:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.646 00:57:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.646 00:57:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.646 "name": "Existed_Raid", 00:15:21.646 "uuid": "a4a81242-401b-478e-a205-789a17877208", 00:15:21.646 "strip_size_kb": 64, 00:15:21.646 "state": "configuring", 00:15:21.646 "raid_level": "concat", 00:15:21.646 "superblock": true, 00:15:21.646 "num_base_bdevs": 2, 00:15:21.646 "num_base_bdevs_discovered": 1, 00:15:21.646 "num_base_bdevs_operational": 2, 00:15:21.646 "base_bdevs_list": [ 00:15:21.646 { 00:15:21.646 "name": "BaseBdev1", 00:15:21.646 "uuid": "1f44da34-e677-4870-974f-cbe5f5bcb4e2", 00:15:21.646 "is_configured": true, 00:15:21.646 "data_offset": 2048, 00:15:21.646 "data_size": 63488 00:15:21.646 }, 00:15:21.646 { 00:15:21.646 "name": "BaseBdev2", 00:15:21.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.646 "is_configured": false, 00:15:21.646 "data_offset": 0, 00:15:21.646 "data_size": 0 00:15:21.646 } 00:15:21.646 ] 00:15:21.646 }' 00:15:21.646 00:57:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.646 00:57:55 -- common/autotest_common.sh@10 -- # set +x 00:15:22.214 00:57:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:22.472 [2024-11-18 00:57:56.779530] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.472 [2024-11-18 00:57:56.779812] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:22.472 00:57:56 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:22.472 00:57:56 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:22.732 00:57:56 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:22.991 BaseBdev1 00:15:22.991 00:57:57 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:22.991 00:57:57 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:22.991 00:57:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:22.991 00:57:57 -- common/autotest_common.sh@899 -- # local i 00:15:22.991 00:57:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:22.991 00:57:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:22.991 00:57:57 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.250 00:57:57 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.250 [ 00:15:23.250 { 00:15:23.250 "name": "BaseBdev1", 00:15:23.250 "aliases": [ 00:15:23.250 "1d7dbd07-ed7e-4ded-ae5e-62ffda4b2aed" 00:15:23.250 ], 00:15:23.250 "product_name": "Malloc disk", 00:15:23.250 "block_size": 512, 00:15:23.250 "num_blocks": 65536, 00:15:23.250 "uuid": "1d7dbd07-ed7e-4ded-ae5e-62ffda4b2aed", 00:15:23.250 "assigned_rate_limits": { 00:15:23.250 "rw_ios_per_sec": 0, 00:15:23.250 "rw_mbytes_per_sec": 0, 00:15:23.250 "r_mbytes_per_sec": 0, 00:15:23.250 "w_mbytes_per_sec": 0 00:15:23.250 }, 00:15:23.250 "claimed": false, 00:15:23.250 "zoned": false, 00:15:23.250 "supported_io_types": { 00:15:23.250 "read": true, 00:15:23.250 "write": true, 00:15:23.250 "unmap": true, 00:15:23.250 "write_zeroes": true, 00:15:23.250 "flush": true, 00:15:23.250 "reset": true, 00:15:23.250 "compare": false, 00:15:23.250 "compare_and_write": false, 00:15:23.250 "abort": true, 00:15:23.250 "nvme_admin": false, 00:15:23.250 "nvme_io": false 00:15:23.250 }, 00:15:23.250 "memory_domains": [ 00:15:23.250 { 00:15:23.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.250 "dma_device_type": 2 00:15:23.250 } 00:15:23.250 ], 00:15:23.250 "driver_specific": {} 00:15:23.250 } 00:15:23.250 ] 00:15:23.251 00:57:57 -- common/autotest_common.sh@905 -- # return 0 00:15:23.251 00:57:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:23.510 [2024-11-18 00:57:57.875598] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.510 [2024-11-18 00:57:57.878300] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.511 [2024-11-18 00:57:57.878488] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.511 00:57:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:23.511 00:57:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:23.511 00:57:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:23.511 00:57:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:23.511 00:57:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:23.511 00:57:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:23.511 00:57:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:23.511 00:57:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:23.511 00:57:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:23.511 00:57:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:23.511 00:57:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:23.511 00:57:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:23.511 00:57:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.511 00:57:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.770 00:57:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:23.770 "name": "Existed_Raid", 00:15:23.770 "uuid": "9b6fdfe7-3f5c-4986-ad90-cd35ed0422e3", 00:15:23.770 "strip_size_kb": 64, 00:15:23.770 "state": 
"configuring", 00:15:23.770 "raid_level": "concat", 00:15:23.770 "superblock": true, 00:15:23.770 "num_base_bdevs": 2, 00:15:23.770 "num_base_bdevs_discovered": 1, 00:15:23.770 "num_base_bdevs_operational": 2, 00:15:23.770 "base_bdevs_list": [ 00:15:23.770 { 00:15:23.770 "name": "BaseBdev1", 00:15:23.770 "uuid": "1d7dbd07-ed7e-4ded-ae5e-62ffda4b2aed", 00:15:23.770 "is_configured": true, 00:15:23.770 "data_offset": 2048, 00:15:23.770 "data_size": 63488 00:15:23.770 }, 00:15:23.770 { 00:15:23.770 "name": "BaseBdev2", 00:15:23.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.770 "is_configured": false, 00:15:23.770 "data_offset": 0, 00:15:23.770 "data_size": 0 00:15:23.770 } 00:15:23.770 ] 00:15:23.770 }' 00:15:23.770 00:57:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:23.770 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:15:24.337 00:57:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:24.596 [2024-11-18 00:57:58.951859] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.596 [2024-11-18 00:57:58.952498] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:15:24.596 [2024-11-18 00:57:58.952669] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:24.596 [2024-11-18 00:57:58.952969] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:15:24.596 [2024-11-18 00:57:58.953590] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:15:24.596 [2024-11-18 00:57:58.953760] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:15:24.596 [2024-11-18 00:57:58.954169] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.596 BaseBdev2 00:15:24.596 00:57:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:24.596 00:57:58 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:24.596 00:57:58 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:24.596 00:57:58 -- common/autotest_common.sh@899 -- # local i 00:15:24.596 00:57:58 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:24.596 00:57:58 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:24.596 00:57:58 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:24.855 00:57:59 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:25.114 [ 00:15:25.114 { 00:15:25.114 "name": "BaseBdev2", 00:15:25.114 "aliases": [ 00:15:25.114 "4215a1a2-3305-47b1-aa7c-1ffa80a2a640" 00:15:25.114 ], 00:15:25.114 "product_name": "Malloc disk", 00:15:25.114 "block_size": 512, 00:15:25.114 "num_blocks": 65536, 00:15:25.114 "uuid": "4215a1a2-3305-47b1-aa7c-1ffa80a2a640", 00:15:25.114 "assigned_rate_limits": { 00:15:25.114 "rw_ios_per_sec": 0, 00:15:25.114 "rw_mbytes_per_sec": 0, 00:15:25.114 "r_mbytes_per_sec": 0, 00:15:25.114 "w_mbytes_per_sec": 0 00:15:25.114 }, 00:15:25.114 "claimed": true, 00:15:25.114 "claim_type": "exclusive_write", 00:15:25.114 "zoned": false, 00:15:25.114 "supported_io_types": { 00:15:25.114 "read": true, 00:15:25.114 "write": true, 00:15:25.114 "unmap": true, 00:15:25.114 "write_zeroes": true, 00:15:25.114 "flush": true, 00:15:25.114 
"reset": true, 00:15:25.114 "compare": false, 00:15:25.114 "compare_and_write": false, 00:15:25.114 "abort": true, 00:15:25.114 "nvme_admin": false, 00:15:25.114 "nvme_io": false 00:15:25.114 }, 00:15:25.114 "memory_domains": [ 00:15:25.114 { 00:15:25.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.114 "dma_device_type": 2 00:15:25.114 } 00:15:25.114 ], 00:15:25.114 "driver_specific": {} 00:15:25.114 } 00:15:25.114 ] 00:15:25.114 00:57:59 -- common/autotest_common.sh@905 -- # return 0 00:15:25.114 00:57:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:25.114 00:57:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:25.114 00:57:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:25.114 00:57:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:25.114 00:57:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:25.114 00:57:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:25.114 00:57:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:25.114 00:57:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:25.114 00:57:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.114 00:57:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.114 00:57:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.114 00:57:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:25.114 00:57:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.114 00:57:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.372 00:57:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.372 "name": "Existed_Raid", 00:15:25.372 "uuid": "9b6fdfe7-3f5c-4986-ad90-cd35ed0422e3", 00:15:25.372 "strip_size_kb": 64, 00:15:25.372 "state": "online", 00:15:25.372 "raid_level": "concat", 00:15:25.372 "superblock": true, 00:15:25.372 "num_base_bdevs": 2, 00:15:25.372 "num_base_bdevs_discovered": 2, 00:15:25.372 "num_base_bdevs_operational": 2, 00:15:25.372 "base_bdevs_list": [ 00:15:25.372 { 00:15:25.372 "name": "BaseBdev1", 00:15:25.372 "uuid": "1d7dbd07-ed7e-4ded-ae5e-62ffda4b2aed", 00:15:25.372 "is_configured": true, 00:15:25.372 "data_offset": 2048, 00:15:25.372 "data_size": 63488 00:15:25.372 }, 00:15:25.372 { 00:15:25.372 "name": "BaseBdev2", 00:15:25.372 "uuid": "4215a1a2-3305-47b1-aa7c-1ffa80a2a640", 00:15:25.372 "is_configured": true, 00:15:25.372 "data_offset": 2048, 00:15:25.372 "data_size": 63488 00:15:25.372 } 00:15:25.372 ] 00:15:25.372 }' 00:15:25.372 00:57:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.372 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:15:26.014 00:58:00 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:26.272 [2024-11-18 00:58:00.472323] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:26.272 [2024-11-18 00:58:00.472598] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.272 [2024-11-18 00:58:00.472828] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:26.272 
00:58:00 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.272 00:58:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.530 00:58:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:26.531 "name": "Existed_Raid", 00:15:26.531 "uuid": "9b6fdfe7-3f5c-4986-ad90-cd35ed0422e3", 00:15:26.531 "strip_size_kb": 64, 00:15:26.531 "state": "offline", 00:15:26.531 "raid_level": "concat", 00:15:26.531 "superblock": true, 00:15:26.531 "num_base_bdevs": 2, 00:15:26.531 "num_base_bdevs_discovered": 1, 00:15:26.531 "num_base_bdevs_operational": 1, 00:15:26.531 "base_bdevs_list": [ 00:15:26.531 { 00:15:26.531 "name": null, 00:15:26.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.531 "is_configured": false, 00:15:26.531 "data_offset": 2048, 00:15:26.531 "data_size": 63488 00:15:26.531 }, 00:15:26.531 { 00:15:26.531 "name": "BaseBdev2", 00:15:26.531 "uuid": "4215a1a2-3305-47b1-aa7c-1ffa80a2a640", 00:15:26.531 "is_configured": true, 00:15:26.531 "data_offset": 2048, 00:15:26.531 "data_size": 63488 00:15:26.531 } 00:15:26.531 ] 00:15:26.531 }' 00:15:26.531 00:58:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:26.531 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:15:27.097 00:58:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:27.097 00:58:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:27.097 00:58:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.097 00:58:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:27.357 00:58:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:27.357 00:58:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.357 00:58:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:27.357 [2024-11-18 00:58:01.744465] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.357 [2024-11-18 00:58:01.744806] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:15:27.615 00:58:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:27.615 00:58:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:27.615 00:58:01 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.615 00:58:01 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:27.875 00:58:02 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:27.875 00:58:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:27.875 00:58:02 -- bdev/bdev_raid.sh@287 -- # killprocess 124223 00:15:27.875 00:58:02 -- common/autotest_common.sh@936 -- # '[' -z 124223 ']' 00:15:27.875 00:58:02 -- common/autotest_common.sh@940 -- # kill -0 124223 00:15:27.875 00:58:02 -- common/autotest_common.sh@941 -- # uname 00:15:27.875 00:58:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:27.875 00:58:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124223 00:15:27.875 00:58:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:27.875 00:58:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:27.875 00:58:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124223' 00:15:27.875 killing process with pid 124223 00:15:27.875 00:58:02 -- common/autotest_common.sh@955 -- # kill 124223 00:15:27.875 00:58:02 -- common/autotest_common.sh@960 -- # wait 124223 00:15:27.875 [2024-11-18 00:58:02.076502] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.875 [2024-11-18 00:58:02.076593] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.134 00:58:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:28.134 00:15:28.134 real 0m9.688s 00:15:28.134 user 0m16.828s 00:15:28.134 sys 0m1.719s 00:15:28.134 00:58:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:28.134 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:15:28.134 ************************************ 00:15:28.134 END TEST raid_state_function_test_sb 00:15:28.134 ************************************ 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:28.393 00:58:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:28.393 00:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:28.393 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:15:28.393 ************************************ 00:15:28.393 START TEST raid_superblock_test 00:15:28.393 ************************************ 00:15:28.393 00:58:02 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 2 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:28.393 00:58:02 -- bdev/bdev_raid.sh@357 -- # raid_pid=124546 00:15:28.394 00:58:02 -- bdev/bdev_raid.sh@358 -- # waitforlisten 124546 
/var/tmp/spdk-raid.sock 00:15:28.394 00:58:02 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:28.394 00:58:02 -- common/autotest_common.sh@829 -- # '[' -z 124546 ']' 00:15:28.394 00:58:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:28.394 00:58:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.394 00:58:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:28.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:28.394 00:58:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.394 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:15:28.394 [2024-11-18 00:58:02.628969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:28.394 [2024-11-18 00:58:02.629519] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124546 ] 00:15:28.394 [2024-11-18 00:58:02.783752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.653 [2024-11-18 00:58:02.861869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.653 [2024-11-18 00:58:02.940501] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.221 00:58:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.221 00:58:03 -- common/autotest_common.sh@862 -- # return 0 00:15:29.221 00:58:03 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:29.221 00:58:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:29.221 00:58:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:29.221 00:58:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:29.221 00:58:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:29.221 00:58:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:29.221 00:58:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:29.221 00:58:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:29.221 00:58:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:29.480 malloc1 00:15:29.480 00:58:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:29.739 [2024-11-18 00:58:03.945486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:29.740 [2024-11-18 00:58:03.945892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.740 [2024-11-18 00:58:03.945985] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:15:29.740 [2024-11-18 00:58:03.946116] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.740 [2024-11-18 00:58:03.949127] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.740 [2024-11-18 00:58:03.949314] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:29.740 pt1 00:15:29.740 00:58:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
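Each base device for raid_superblock_test is built the same way as malloc1/pt1 above: a malloc bdev wrapped by a passthru bdev that carries a fixed, predictable UUID. A sketch of one pass of that loop with the values used in this run:

# Sketch: build base device i as malloc -> passthru, as the @362-371 trace does.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
i=1
bdev_malloc=malloc$i
bdev_pt=pt$i
bdev_pt_uuid=00000000-0000-0000-0000-00000000000$i             # fixed UUID, as in the trace
$RPC bdev_malloc_create 32 512 -b "$bdev_malloc"               # 32 MB malloc bdev, 512-byte blocks
$RPC bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"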
00:15:29.740 00:58:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:29.740 00:58:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:29.740 00:58:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:29.740 00:58:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:29.740 00:58:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:29.740 00:58:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:29.740 00:58:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:29.740 00:58:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:29.999 malloc2 00:15:29.999 00:58:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:29.999 [2024-11-18 00:58:04.337783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:29.999 [2024-11-18 00:58:04.338116] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.999 [2024-11-18 00:58:04.338219] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:29.999 [2024-11-18 00:58:04.338354] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.999 [2024-11-18 00:58:04.341158] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.999 [2024-11-18 00:58:04.341318] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:29.999 pt2 00:15:29.999 00:58:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:29.999 00:58:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:29.999 00:58:04 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:30.258 [2024-11-18 00:58:04.597938] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:30.258 [2024-11-18 00:58:04.600728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:30.258 [2024-11-18 00:58:04.601074] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:15:30.258 [2024-11-18 00:58:04.601182] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:30.258 [2024-11-18 00:58:04.601415] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:30.258 [2024-11-18 00:58:04.601918] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:15:30.258 [2024-11-18 00:58:04.602023] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:15:30.258 [2024-11-18 00:58:04.602313] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.258 00:58:04 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:30.258 00:58:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:30.258 00:58:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:30.258 00:58:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:30.258 00:58:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:30.258 00:58:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:15:30.258 00:58:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.258 00:58:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.258 00:58:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.258 00:58:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.258 00:58:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.258 00:58:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.518 00:58:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.518 "name": "raid_bdev1", 00:15:30.518 "uuid": "a4646c31-9c5d-4e71-8245-e5f8c5381ccb", 00:15:30.518 "strip_size_kb": 64, 00:15:30.518 "state": "online", 00:15:30.518 "raid_level": "concat", 00:15:30.518 "superblock": true, 00:15:30.518 "num_base_bdevs": 2, 00:15:30.518 "num_base_bdevs_discovered": 2, 00:15:30.518 "num_base_bdevs_operational": 2, 00:15:30.518 "base_bdevs_list": [ 00:15:30.518 { 00:15:30.518 "name": "pt1", 00:15:30.518 "uuid": "5b68c212-d9f5-57d8-b7ef-e5d566db32af", 00:15:30.518 "is_configured": true, 00:15:30.518 "data_offset": 2048, 00:15:30.518 "data_size": 63488 00:15:30.518 }, 00:15:30.518 { 00:15:30.518 "name": "pt2", 00:15:30.518 "uuid": "2d6b76ae-38d8-5b38-9453-34df01d757e3", 00:15:30.518 "is_configured": true, 00:15:30.518 "data_offset": 2048, 00:15:30.518 "data_size": 63488 00:15:30.518 } 00:15:30.518 ] 00:15:30.518 }' 00:15:30.518 00:58:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.518 00:58:04 -- common/autotest_common.sh@10 -- # set +x 00:15:31.086 00:58:05 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:31.086 00:58:05 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:31.346 [2024-11-18 00:58:05.691174] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.346 00:58:05 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=a4646c31-9c5d-4e71-8245-e5f8c5381ccb 00:15:31.346 00:58:05 -- bdev/bdev_raid.sh@380 -- # '[' -z a4646c31-9c5d-4e71-8245-e5f8c5381ccb ']' 00:15:31.346 00:58:05 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:31.605 [2024-11-18 00:58:05.882999] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.605 [2024-11-18 00:58:05.883284] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.605 [2024-11-18 00:58:05.883549] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.605 [2024-11-18 00:58:05.883734] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.605 [2024-11-18 00:58:05.883821] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:15:31.605 00:58:05 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.605 00:58:05 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:31.864 00:58:06 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:31.864 00:58:06 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:31.864 00:58:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.864 00:58:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
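With raid_bdev1 deleted, its superblock is still sitting on the base bdevs, so the teardown of pt1 (above) and pt2 (below) is followed by recreating them; the examine path then finds the superblock ("raid superblock found on bdev pt1/pt2" further down) and reassembles raid_bdev1 without another bdev_raid_create. A sketch of that round trip with the names used here:

# Sketch: superblock-based reassembly, as the surrounding lines exercise it.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_delete raid_bdev1        # superblocks written with -s stay on pt1/pt2
$RPC bdev_passthru_delete pt1
$RPC bdev_passthru_delete pt2
# re-registering the passthru bdevs triggers examine; once both are back,
# raid_bdev1 is re-created from the on-disk superblock
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
$RPC bdev_wait_for_examine
$RPC bdev_raid_get_bdevs all            # raid_bdev1 should be listed as online again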
00:15:32.123 00:58:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:32.123 00:58:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:32.383 00:58:06 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:32.383 00:58:06 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:32.643 00:58:06 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:32.643 00:58:06 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:32.643 00:58:06 -- common/autotest_common.sh@650 -- # local es=0 00:15:32.643 00:58:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:32.643 00:58:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.643 00:58:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.643 00:58:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.643 00:58:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.643 00:58:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.643 00:58:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.643 00:58:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.643 00:58:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:32.643 00:58:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:32.643 [2024-11-18 00:58:07.035210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:32.643 [2024-11-18 00:58:07.037985] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:32.643 [2024-11-18 00:58:07.038208] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:32.643 [2024-11-18 00:58:07.038444] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:32.643 [2024-11-18 00:58:07.038581] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:32.643 [2024-11-18 00:58:07.038620] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:15:32.643 request: 00:15:32.643 { 00:15:32.643 "name": "raid_bdev1", 00:15:32.643 "raid_level": "concat", 00:15:32.643 "base_bdevs": [ 00:15:32.643 "malloc1", 00:15:32.643 "malloc2" 00:15:32.643 ], 00:15:32.643 "superblock": false, 00:15:32.643 "strip_size_kb": 64, 00:15:32.643 "method": "bdev_raid_create", 00:15:32.643 "req_id": 1 00:15:32.643 } 00:15:32.643 Got JSON-RPC error response 00:15:32.643 response: 00:15:32.643 { 00:15:32.643 "code": -17, 00:15:32.643 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:32.643 } 00:15:32.902 00:58:07 -- common/autotest_common.sh@653 -- # es=1 00:15:32.902 00:58:07 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.902 00:58:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.902 00:58:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.902 00:58:07 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:32.903 00:58:07 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.903 00:58:07 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:32.903 00:58:07 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:32.903 00:58:07 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:33.162 [2024-11-18 00:58:07.427417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:33.162 [2024-11-18 00:58:07.427790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.162 [2024-11-18 00:58:07.427893] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:33.162 [2024-11-18 00:58:07.427985] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.162 [2024-11-18 00:58:07.430787] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.162 [2024-11-18 00:58:07.430941] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:33.162 [2024-11-18 00:58:07.431133] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:33.162 [2024-11-18 00:58:07.431285] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:33.162 pt1 00:15:33.162 00:58:07 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:33.162 00:58:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:33.162 00:58:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:33.162 00:58:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:33.162 00:58:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:33.162 00:58:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:33.162 00:58:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:33.162 00:58:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:33.162 00:58:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:33.162 00:58:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:33.162 00:58:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.162 00:58:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.421 00:58:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:33.421 "name": "raid_bdev1", 00:15:33.421 "uuid": "a4646c31-9c5d-4e71-8245-e5f8c5381ccb", 00:15:33.421 "strip_size_kb": 64, 00:15:33.421 "state": "configuring", 00:15:33.421 "raid_level": "concat", 00:15:33.421 "superblock": true, 00:15:33.421 "num_base_bdevs": 2, 00:15:33.421 "num_base_bdevs_discovered": 1, 00:15:33.421 "num_base_bdevs_operational": 2, 00:15:33.421 "base_bdevs_list": [ 00:15:33.421 { 00:15:33.421 "name": "pt1", 00:15:33.421 "uuid": "5b68c212-d9f5-57d8-b7ef-e5d566db32af", 00:15:33.421 "is_configured": true, 00:15:33.421 "data_offset": 2048, 00:15:33.421 "data_size": 63488 00:15:33.421 }, 00:15:33.421 { 00:15:33.421 "name": null, 00:15:33.421 "uuid": 
"2d6b76ae-38d8-5b38-9453-34df01d757e3", 00:15:33.421 "is_configured": false, 00:15:33.421 "data_offset": 2048, 00:15:33.421 "data_size": 63488 00:15:33.421 } 00:15:33.421 ] 00:15:33.421 }' 00:15:33.421 00:58:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:33.421 00:58:07 -- common/autotest_common.sh@10 -- # set +x 00:15:33.990 00:58:08 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:33.990 00:58:08 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:33.990 00:58:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:33.990 00:58:08 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:34.249 [2024-11-18 00:58:08.595777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:34.249 [2024-11-18 00:58:08.596134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.249 [2024-11-18 00:58:08.596208] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:15:34.249 [2024-11-18 00:58:08.596310] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.249 [2024-11-18 00:58:08.596874] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.249 [2024-11-18 00:58:08.597020] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:34.249 [2024-11-18 00:58:08.597189] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:34.249 [2024-11-18 00:58:08.597293] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:34.249 [2024-11-18 00:58:08.597454] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:34.249 [2024-11-18 00:58:08.597611] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:34.249 [2024-11-18 00:58:08.597728] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:34.249 [2024-11-18 00:58:08.598074] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:34.249 [2024-11-18 00:58:08.598198] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:34.249 [2024-11-18 00:58:08.598381] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.249 pt2 00:15:34.249 00:58:08 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:34.249 00:58:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:34.249 00:58:08 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:34.249 00:58:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:34.249 00:58:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:34.249 00:58:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:34.249 00:58:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:34.249 00:58:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:34.249 00:58:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:34.249 00:58:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:34.249 00:58:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:34.249 00:58:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:34.249 00:58:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.249 00:58:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.509 00:58:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:34.509 "name": "raid_bdev1", 00:15:34.509 "uuid": "a4646c31-9c5d-4e71-8245-e5f8c5381ccb", 00:15:34.509 "strip_size_kb": 64, 00:15:34.509 "state": "online", 00:15:34.509 "raid_level": "concat", 00:15:34.509 "superblock": true, 00:15:34.509 "num_base_bdevs": 2, 00:15:34.509 "num_base_bdevs_discovered": 2, 00:15:34.509 "num_base_bdevs_operational": 2, 00:15:34.509 "base_bdevs_list": [ 00:15:34.509 { 00:15:34.509 "name": "pt1", 00:15:34.509 "uuid": "5b68c212-d9f5-57d8-b7ef-e5d566db32af", 00:15:34.509 "is_configured": true, 00:15:34.509 "data_offset": 2048, 00:15:34.509 "data_size": 63488 00:15:34.509 }, 00:15:34.509 { 00:15:34.509 "name": "pt2", 00:15:34.509 "uuid": "2d6b76ae-38d8-5b38-9453-34df01d757e3", 00:15:34.509 "is_configured": true, 00:15:34.509 "data_offset": 2048, 00:15:34.509 "data_size": 63488 00:15:34.509 } 00:15:34.509 ] 00:15:34.509 }' 00:15:34.509 00:58:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.509 00:58:08 -- common/autotest_common.sh@10 -- # set +x 00:15:35.076 00:58:09 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:35.076 00:58:09 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:35.334 [2024-11-18 00:58:09.664158] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.334 00:58:09 -- bdev/bdev_raid.sh@430 -- # '[' a4646c31-9c5d-4e71-8245-e5f8c5381ccb '!=' a4646c31-9c5d-4e71-8245-e5f8c5381ccb ']' 00:15:35.334 00:58:09 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:15:35.334 00:58:09 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:35.335 00:58:09 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:35.335 00:58:09 -- bdev/bdev_raid.sh@511 -- # killprocess 124546 00:15:35.335 00:58:09 -- common/autotest_common.sh@936 -- # '[' -z 124546 ']' 00:15:35.335 00:58:09 -- common/autotest_common.sh@940 -- # kill -0 124546 00:15:35.335 00:58:09 -- common/autotest_common.sh@941 -- # uname 00:15:35.335 00:58:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:35.335 00:58:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124546 00:15:35.335 00:58:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:35.335 00:58:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:35.335 00:58:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124546' 00:15:35.335 killing process with pid 124546 00:15:35.592 00:58:09 -- common/autotest_common.sh@955 -- # kill 124546 00:15:35.592 00:58:09 -- common/autotest_common.sh@960 -- # wait 124546 00:15:35.592 [2024-11-18 00:58:09.735441] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.592 [2024-11-18 00:58:09.735554] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.592 [2024-11-18 00:58:09.735612] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.592 [2024-11-18 00:58:09.735622] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:35.592 [2024-11-18 00:58:09.778505] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.851 00:58:10 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:35.851 00:15:35.851 real 0m7.623s 
00:15:35.851 user 0m12.987s 00:15:35.851 sys 0m1.492s 00:15:35.851 00:58:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:35.851 00:58:10 -- common/autotest_common.sh@10 -- # set +x 00:15:35.851 ************************************ 00:15:35.852 END TEST raid_superblock_test 00:15:35.852 ************************************ 00:15:35.852 00:58:10 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:35.852 00:58:10 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:35.852 00:58:10 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:35.852 00:58:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:35.852 00:58:10 -- common/autotest_common.sh@10 -- # set +x 00:15:36.111 ************************************ 00:15:36.111 START TEST raid_state_function_test 00:15:36.111 ************************************ 00:15:36.111 00:58:10 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 false 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@226 -- # raid_pid=124785 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124785' 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:36.111 Process raid pid: 124785 00:15:36.111 00:58:10 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124785 /var/tmp/spdk-raid.sock 00:15:36.111 00:58:10 -- common/autotest_common.sh@829 -- # '[' -z 124785 ']' 00:15:36.111 00:58:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:36.111 00:58:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.111 00:58:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
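raid_state_function_test then brings up its own bdev_svc target on the same RPC socket; the "Process raid pid" / "Waiting for process" lines come from the generic start-and-wait helpers. A sketch of that startup pattern with the arguments used here (the rpc_get_methods readiness probe is an illustration, not the helper's exact code):

# Sketch: start a standalone bdev_svc target and wait for its RPC socket to answer.
SOCK=/var/tmp/spdk-raid.sock
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 -L bdev_raid &
raid_pid=$!
echo "Process raid pid: $raid_pid"
# poll until the target is up; rpc_get_methods is assumed here as a cheap readiness check
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done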
00:15:36.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:36.111 00:58:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.111 00:58:10 -- common/autotest_common.sh@10 -- # set +x 00:15:36.111 [2024-11-18 00:58:10.334792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:36.111 [2024-11-18 00:58:10.335360] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.111 [2024-11-18 00:58:10.487996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.370 [2024-11-18 00:58:10.568427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.370 [2024-11-18 00:58:10.647345] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.938 00:58:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.938 00:58:11 -- common/autotest_common.sh@862 -- # return 0 00:15:36.938 00:58:11 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:37.197 [2024-11-18 00:58:11.455834] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.197 [2024-11-18 00:58:11.456195] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.197 [2024-11-18 00:58:11.456289] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.197 [2024-11-18 00:58:11.456343] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.197 00:58:11 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:37.197 00:58:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:37.197 00:58:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:37.197 00:58:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:37.197 00:58:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:37.197 00:58:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:37.197 00:58:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.197 00:58:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.197 00:58:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.197 00:58:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.197 00:58:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.197 00:58:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.457 00:58:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.457 "name": "Existed_Raid", 00:15:37.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.457 "strip_size_kb": 0, 00:15:37.457 "state": "configuring", 00:15:37.457 "raid_level": "raid1", 00:15:37.457 "superblock": false, 00:15:37.457 "num_base_bdevs": 2, 00:15:37.457 "num_base_bdevs_discovered": 0, 00:15:37.457 "num_base_bdevs_operational": 2, 00:15:37.457 "base_bdevs_list": [ 00:15:37.457 { 00:15:37.457 "name": "BaseBdev1", 00:15:37.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.457 "is_configured": false, 00:15:37.457 "data_offset": 0, 00:15:37.457 "data_size": 0 
00:15:37.457 }, 00:15:37.457 { 00:15:37.457 "name": "BaseBdev2", 00:15:37.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.457 "is_configured": false, 00:15:37.457 "data_offset": 0, 00:15:37.457 "data_size": 0 00:15:37.457 } 00:15:37.457 ] 00:15:37.457 }' 00:15:37.457 00:58:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.457 00:58:11 -- common/autotest_common.sh@10 -- # set +x 00:15:38.025 00:58:12 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:38.296 [2024-11-18 00:58:12.483895] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:38.296 [2024-11-18 00:58:12.484219] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:38.296 00:58:12 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:38.296 [2024-11-18 00:58:12.668008] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.296 [2024-11-18 00:58:12.668369] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.296 [2024-11-18 00:58:12.668454] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:38.296 [2024-11-18 00:58:12.668514] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:38.566 00:58:12 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:38.566 [2024-11-18 00:58:12.948280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.566 BaseBdev1 00:15:38.826 00:58:12 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:38.826 00:58:12 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:38.826 00:58:12 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:38.826 00:58:12 -- common/autotest_common.sh@899 -- # local i 00:15:38.826 00:58:12 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:38.826 00:58:12 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:38.826 00:58:12 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.826 00:58:13 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:39.086 [ 00:15:39.086 { 00:15:39.086 "name": "BaseBdev1", 00:15:39.086 "aliases": [ 00:15:39.086 "69b563f4-4121-4009-a4e2-e0a7425f24c2" 00:15:39.086 ], 00:15:39.086 "product_name": "Malloc disk", 00:15:39.086 "block_size": 512, 00:15:39.086 "num_blocks": 65536, 00:15:39.086 "uuid": "69b563f4-4121-4009-a4e2-e0a7425f24c2", 00:15:39.086 "assigned_rate_limits": { 00:15:39.086 "rw_ios_per_sec": 0, 00:15:39.086 "rw_mbytes_per_sec": 0, 00:15:39.086 "r_mbytes_per_sec": 0, 00:15:39.086 "w_mbytes_per_sec": 0 00:15:39.086 }, 00:15:39.086 "claimed": true, 00:15:39.086 "claim_type": "exclusive_write", 00:15:39.086 "zoned": false, 00:15:39.086 "supported_io_types": { 00:15:39.086 "read": true, 00:15:39.086 "write": true, 00:15:39.086 "unmap": true, 00:15:39.086 "write_zeroes": true, 00:15:39.086 "flush": true, 00:15:39.086 "reset": true, 00:15:39.086 "compare": false, 00:15:39.086 "compare_and_write": false, 
00:15:39.086 "abort": true, 00:15:39.086 "nvme_admin": false, 00:15:39.086 "nvme_io": false 00:15:39.086 }, 00:15:39.086 "memory_domains": [ 00:15:39.086 { 00:15:39.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.086 "dma_device_type": 2 00:15:39.086 } 00:15:39.086 ], 00:15:39.086 "driver_specific": {} 00:15:39.086 } 00:15:39.086 ] 00:15:39.086 00:58:13 -- common/autotest_common.sh@905 -- # return 0 00:15:39.086 00:58:13 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:39.086 00:58:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:39.086 00:58:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:39.086 00:58:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:39.086 00:58:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:39.086 00:58:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:39.086 00:58:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:39.086 00:58:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:39.086 00:58:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:39.086 00:58:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:39.086 00:58:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.086 00:58:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.346 00:58:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:39.346 "name": "Existed_Raid", 00:15:39.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.346 "strip_size_kb": 0, 00:15:39.346 "state": "configuring", 00:15:39.346 "raid_level": "raid1", 00:15:39.346 "superblock": false, 00:15:39.346 "num_base_bdevs": 2, 00:15:39.346 "num_base_bdevs_discovered": 1, 00:15:39.346 "num_base_bdevs_operational": 2, 00:15:39.346 "base_bdevs_list": [ 00:15:39.346 { 00:15:39.346 "name": "BaseBdev1", 00:15:39.346 "uuid": "69b563f4-4121-4009-a4e2-e0a7425f24c2", 00:15:39.346 "is_configured": true, 00:15:39.346 "data_offset": 0, 00:15:39.346 "data_size": 65536 00:15:39.346 }, 00:15:39.346 { 00:15:39.346 "name": "BaseBdev2", 00:15:39.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.346 "is_configured": false, 00:15:39.346 "data_offset": 0, 00:15:39.346 "data_size": 0 00:15:39.346 } 00:15:39.346 ] 00:15:39.346 }' 00:15:39.346 00:58:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:39.346 00:58:13 -- common/autotest_common.sh@10 -- # set +x 00:15:39.915 00:58:14 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:40.174 [2024-11-18 00:58:14.368578] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:40.174 [2024-11-18 00:58:14.368869] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:40.174 00:58:14 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:40.174 00:58:14 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:40.174 [2024-11-18 00:58:14.560723] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.174 [2024-11-18 00:58:14.563396] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:40.174 [2024-11-18 00:58:14.563587] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:40.434 00:58:14 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:40.434 00:58:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:40.434 00:58:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:40.434 00:58:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:40.434 00:58:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:40.434 00:58:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:40.434 00:58:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:40.434 00:58:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:40.434 00:58:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:40.434 00:58:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:40.434 00:58:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:40.434 00:58:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:40.434 00:58:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.434 00:58:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.693 00:58:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:40.693 "name": "Existed_Raid", 00:15:40.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.693 "strip_size_kb": 0, 00:15:40.693 "state": "configuring", 00:15:40.693 "raid_level": "raid1", 00:15:40.693 "superblock": false, 00:15:40.693 "num_base_bdevs": 2, 00:15:40.693 "num_base_bdevs_discovered": 1, 00:15:40.693 "num_base_bdevs_operational": 2, 00:15:40.693 "base_bdevs_list": [ 00:15:40.693 { 00:15:40.693 "name": "BaseBdev1", 00:15:40.693 "uuid": "69b563f4-4121-4009-a4e2-e0a7425f24c2", 00:15:40.693 "is_configured": true, 00:15:40.693 "data_offset": 0, 00:15:40.693 "data_size": 65536 00:15:40.693 }, 00:15:40.693 { 00:15:40.693 "name": "BaseBdev2", 00:15:40.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.693 "is_configured": false, 00:15:40.693 "data_offset": 0, 00:15:40.693 "data_size": 0 00:15:40.693 } 00:15:40.693 ] 00:15:40.693 }' 00:15:40.693 00:58:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:40.693 00:58:14 -- common/autotest_common.sh@10 -- # set +x 00:15:41.262 00:58:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:41.262 [2024-11-18 00:58:15.627998] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.262 [2024-11-18 00:58:15.628317] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:15:41.262 [2024-11-18 00:58:15.628508] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:41.262 [2024-11-18 00:58:15.628929] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:15:41.262 [2024-11-18 00:58:15.629789] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:15:41.262 [2024-11-18 00:58:15.629974] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:15:41.262 [2024-11-18 00:58:15.630520] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.262 BaseBdev2 00:15:41.262 00:58:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:41.262 00:58:15 -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:41.262 00:58:15 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:41.262 00:58:15 -- common/autotest_common.sh@899 -- # local i 00:15:41.262 00:58:15 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:41.262 00:58:15 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:41.262 00:58:15 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:41.522 00:58:15 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:41.782 [ 00:15:41.782 { 00:15:41.782 "name": "BaseBdev2", 00:15:41.782 "aliases": [ 00:15:41.782 "5f025e83-0954-40d7-a766-ffd32f489ec1" 00:15:41.782 ], 00:15:41.782 "product_name": "Malloc disk", 00:15:41.782 "block_size": 512, 00:15:41.782 "num_blocks": 65536, 00:15:41.782 "uuid": "5f025e83-0954-40d7-a766-ffd32f489ec1", 00:15:41.782 "assigned_rate_limits": { 00:15:41.782 "rw_ios_per_sec": 0, 00:15:41.782 "rw_mbytes_per_sec": 0, 00:15:41.782 "r_mbytes_per_sec": 0, 00:15:41.782 "w_mbytes_per_sec": 0 00:15:41.782 }, 00:15:41.782 "claimed": true, 00:15:41.782 "claim_type": "exclusive_write", 00:15:41.782 "zoned": false, 00:15:41.782 "supported_io_types": { 00:15:41.782 "read": true, 00:15:41.782 "write": true, 00:15:41.782 "unmap": true, 00:15:41.782 "write_zeroes": true, 00:15:41.782 "flush": true, 00:15:41.782 "reset": true, 00:15:41.782 "compare": false, 00:15:41.782 "compare_and_write": false, 00:15:41.782 "abort": true, 00:15:41.782 "nvme_admin": false, 00:15:41.782 "nvme_io": false 00:15:41.782 }, 00:15:41.782 "memory_domains": [ 00:15:41.782 { 00:15:41.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.782 "dma_device_type": 2 00:15:41.782 } 00:15:41.782 ], 00:15:41.782 "driver_specific": {} 00:15:41.782 } 00:15:41.782 ] 00:15:41.782 00:58:16 -- common/autotest_common.sh@905 -- # return 0 00:15:41.782 00:58:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:41.782 00:58:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:41.782 00:58:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:41.782 00:58:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:41.782 00:58:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:41.782 00:58:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:41.782 00:58:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:41.782 00:58:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:41.782 00:58:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.782 00:58:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.782 00:58:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:41.782 00:58:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.782 00:58:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.782 00:58:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.042 00:58:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:42.042 "name": "Existed_Raid", 00:15:42.042 "uuid": "1900b452-23ae-4e31-bce2-5161be721a32", 00:15:42.042 "strip_size_kb": 0, 00:15:42.042 "state": "online", 00:15:42.042 "raid_level": "raid1", 00:15:42.042 "superblock": false, 00:15:42.042 "num_base_bdevs": 2, 00:15:42.042 
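waitforbdev, as traced here, appears to combine two RPCs: bdev_wait_for_examine to let pending examine callbacks settle, then bdev_get_bdevs with the bdev name and a timeout (2000 ms by default) to block until the descriptor is visible. The same sequence run outside the helper would look roughly like:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Let examine finish, then poll for BaseBdev2 for up to 2000 ms.
$RPC -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
$RPC -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 >/dev/null && echo 'BaseBdev2 is up'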
"num_base_bdevs_discovered": 2, 00:15:42.042 "num_base_bdevs_operational": 2, 00:15:42.042 "base_bdevs_list": [ 00:15:42.042 { 00:15:42.042 "name": "BaseBdev1", 00:15:42.042 "uuid": "69b563f4-4121-4009-a4e2-e0a7425f24c2", 00:15:42.042 "is_configured": true, 00:15:42.042 "data_offset": 0, 00:15:42.042 "data_size": 65536 00:15:42.042 }, 00:15:42.042 { 00:15:42.042 "name": "BaseBdev2", 00:15:42.042 "uuid": "5f025e83-0954-40d7-a766-ffd32f489ec1", 00:15:42.042 "is_configured": true, 00:15:42.042 "data_offset": 0, 00:15:42.042 "data_size": 65536 00:15:42.042 } 00:15:42.042 ] 00:15:42.042 }' 00:15:42.042 00:58:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:42.042 00:58:16 -- common/autotest_common.sh@10 -- # set +x 00:15:42.608 00:58:16 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:42.866 [2024-11-18 00:58:17.072460] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:42.866 00:58:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:42.866 00:58:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:42.866 00:58:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:42.866 00:58:17 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:42.866 00:58:17 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:42.866 00:58:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:42.866 00:58:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:42.866 00:58:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:42.866 00:58:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:42.866 00:58:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:42.866 00:58:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:42.866 00:58:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:42.866 00:58:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:42.867 00:58:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:42.867 00:58:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:42.867 00:58:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.867 00:58:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.125 00:58:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.125 "name": "Existed_Raid", 00:15:43.125 "uuid": "1900b452-23ae-4e31-bce2-5161be721a32", 00:15:43.125 "strip_size_kb": 0, 00:15:43.125 "state": "online", 00:15:43.125 "raid_level": "raid1", 00:15:43.125 "superblock": false, 00:15:43.125 "num_base_bdevs": 2, 00:15:43.125 "num_base_bdevs_discovered": 1, 00:15:43.125 "num_base_bdevs_operational": 1, 00:15:43.125 "base_bdevs_list": [ 00:15:43.125 { 00:15:43.125 "name": null, 00:15:43.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.125 "is_configured": false, 00:15:43.125 "data_offset": 0, 00:15:43.125 "data_size": 65536 00:15:43.125 }, 00:15:43.125 { 00:15:43.125 "name": "BaseBdev2", 00:15:43.125 "uuid": "5f025e83-0954-40d7-a766-ffd32f489ec1", 00:15:43.125 "is_configured": true, 00:15:43.125 "data_offset": 0, 00:15:43.125 "data_size": 65536 00:15:43.125 } 00:15:43.125 ] 00:15:43.125 }' 00:15:43.125 00:58:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.125 00:58:17 -- common/autotest_common.sh@10 -- # set +x 00:15:43.692 00:58:17 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:43.692 00:58:17 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:15:43.692 00:58:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.692 00:58:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:43.951 00:58:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:43.951 00:58:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:43.951 00:58:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:43.951 [2024-11-18 00:58:18.278302] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:43.951 [2024-11-18 00:58:18.278625] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.951 [2024-11-18 00:58:18.278860] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.951 [2024-11-18 00:58:18.300572] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.951 [2024-11-18 00:58:18.300851] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:15:43.951 00:58:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:43.951 00:58:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:43.951 00:58:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:43.951 00:58:18 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.210 00:58:18 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:44.210 00:58:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:44.210 00:58:18 -- bdev/bdev_raid.sh@287 -- # killprocess 124785 00:15:44.210 00:58:18 -- common/autotest_common.sh@936 -- # '[' -z 124785 ']' 00:15:44.210 00:58:18 -- common/autotest_common.sh@940 -- # kill -0 124785 00:15:44.210 00:58:18 -- common/autotest_common.sh@941 -- # uname 00:15:44.210 00:58:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:44.210 00:58:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124785 00:15:44.210 00:58:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:44.210 00:58:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:44.210 00:58:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124785' 00:15:44.210 killing process with pid 124785 00:15:44.468 00:58:18 -- common/autotest_common.sh@955 -- # kill 124785 00:15:44.468 [2024-11-18 00:58:18.610888] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:44.468 00:58:18 -- common/autotest_common.sh@960 -- # wait 124785 00:15:44.468 [2024-11-18 00:58:18.611177] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.727 ************************************ 00:15:44.727 END TEST raid_state_function_test 00:15:44.727 ************************************ 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:44.727 00:15:44.727 real 0m8.764s 00:15:44.727 user 0m15.164s 00:15:44.727 sys 0m1.541s 00:15:44.727 00:58:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:44.727 00:58:19 -- common/autotest_common.sh@10 -- # set +x 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:44.727 00:58:19 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:44.727 00:58:19 -- 
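Teardown for this test is the killprocess helper: check that the pid is still alive, confirm the process name, then kill the bdev_svc reactor and wait for it, which is what triggers the raid_bdev_fini_start/raid_bdev_exit lines above. A hypothetical condensed version of that sequence (not the helper's actual body):

pid=124785            # raid_pid printed when this test started
if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"           # bdev_svc shuts down, running the raid module's fini/exit hooks
    wait "$pid" || true   # works here because bdev_svc was launched from this shell
fi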
common/autotest_common.sh@1093 -- # xtrace_disable 00:15:44.727 00:58:19 -- common/autotest_common.sh@10 -- # set +x 00:15:44.727 ************************************ 00:15:44.727 START TEST raid_state_function_test_sb 00:15:44.727 ************************************ 00:15:44.727 00:58:19 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 true 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@226 -- # raid_pid=125096 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125096' 00:15:44.727 Process raid pid: 125096 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:44.727 00:58:19 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125096 /var/tmp/spdk-raid.sock 00:15:44.727 00:58:19 -- common/autotest_common.sh@829 -- # '[' -z 125096 ']' 00:15:44.727 00:58:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:44.727 00:58:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.727 00:58:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:44.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:44.727 00:58:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.727 00:58:19 -- common/autotest_common.sh@10 -- # set +x 00:15:45.000 [2024-11-18 00:58:19.153072] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
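The _sb variant starts its own bdev_svc application bound to a private RPC socket, with bdev_raid debug logging enabled, and waits for the socket to answer before issuing RPCs. A hedged sketch of that startup (paths and flags copied from the trace; the polling loop is a crude stand-in for waitforlisten and assumes rpc_get_methods is available):

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done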
00:15:45.000 [2024-11-18 00:58:19.153282] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.000 [2024-11-18 00:58:19.297959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.000 [2024-11-18 00:58:19.380132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.264 [2024-11-18 00:58:19.459095] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.828 00:58:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.828 00:58:19 -- common/autotest_common.sh@862 -- # return 0 00:15:45.828 00:58:19 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:45.828 [2024-11-18 00:58:20.224126] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.828 [2024-11-18 00:58:20.224243] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.828 [2024-11-18 00:58:20.224255] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.828 [2024-11-18 00:58:20.224275] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.086 00:58:20 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:46.086 00:58:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:46.086 00:58:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:46.086 00:58:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:46.086 00:58:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:46.086 00:58:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:46.086 00:58:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.086 00:58:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:46.086 00:58:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.086 00:58:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.086 00:58:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.086 00:58:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.086 00:58:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:46.086 "name": "Existed_Raid", 00:15:46.086 "uuid": "4565ffb5-11df-45bf-be99-f22e24b05122", 00:15:46.086 "strip_size_kb": 0, 00:15:46.086 "state": "configuring", 00:15:46.086 "raid_level": "raid1", 00:15:46.086 "superblock": true, 00:15:46.086 "num_base_bdevs": 2, 00:15:46.086 "num_base_bdevs_discovered": 0, 00:15:46.086 "num_base_bdevs_operational": 2, 00:15:46.086 "base_bdevs_list": [ 00:15:46.086 { 00:15:46.086 "name": "BaseBdev1", 00:15:46.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.086 "is_configured": false, 00:15:46.086 "data_offset": 0, 00:15:46.086 "data_size": 0 00:15:46.087 }, 00:15:46.087 { 00:15:46.087 "name": "BaseBdev2", 00:15:46.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.087 "is_configured": false, 00:15:46.087 "data_offset": 0, 00:15:46.087 "data_size": 0 00:15:46.087 } 00:15:46.087 ] 00:15:46.087 }' 00:15:46.087 00:58:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:46.087 00:58:20 -- 
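With the superblock flag, bdev_raid_create -s registers the array even though neither base bdev exists yet; the dump immediately afterwards shows state "configuring", superblock true and num_base_bdevs_discovered 0. The first step of that flow, flags copied from the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Register the raid1 array up front; -s requests an on-disk superblock on every member.
$RPC -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# No base bdev exists yet, so the array stays in "configuring" with 0 members discovered.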
common/autotest_common.sh@10 -- # set +x 00:15:47.019 00:58:21 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:47.019 [2024-11-18 00:58:21.316099] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:47.019 [2024-11-18 00:58:21.316159] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:47.019 00:58:21 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:47.277 [2024-11-18 00:58:21.504196] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:47.277 [2024-11-18 00:58:21.504297] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:47.277 [2024-11-18 00:58:21.504308] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:47.277 [2024-11-18 00:58:21.504335] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:47.277 00:58:21 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:47.535 [2024-11-18 00:58:21.800251] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.535 BaseBdev1 00:15:47.535 00:58:21 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:47.535 00:58:21 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:47.535 00:58:21 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:47.535 00:58:21 -- common/autotest_common.sh@899 -- # local i 00:15:47.535 00:58:21 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:47.535 00:58:21 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:47.535 00:58:21 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:47.794 00:58:22 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:48.051 [ 00:15:48.051 { 00:15:48.051 "name": "BaseBdev1", 00:15:48.051 "aliases": [ 00:15:48.051 "1a3d4dbb-b35d-43aa-a082-427a58feec97" 00:15:48.051 ], 00:15:48.051 "product_name": "Malloc disk", 00:15:48.051 "block_size": 512, 00:15:48.051 "num_blocks": 65536, 00:15:48.051 "uuid": "1a3d4dbb-b35d-43aa-a082-427a58feec97", 00:15:48.051 "assigned_rate_limits": { 00:15:48.051 "rw_ios_per_sec": 0, 00:15:48.051 "rw_mbytes_per_sec": 0, 00:15:48.051 "r_mbytes_per_sec": 0, 00:15:48.051 "w_mbytes_per_sec": 0 00:15:48.051 }, 00:15:48.051 "claimed": true, 00:15:48.051 "claim_type": "exclusive_write", 00:15:48.051 "zoned": false, 00:15:48.051 "supported_io_types": { 00:15:48.051 "read": true, 00:15:48.051 "write": true, 00:15:48.051 "unmap": true, 00:15:48.051 "write_zeroes": true, 00:15:48.051 "flush": true, 00:15:48.051 "reset": true, 00:15:48.051 "compare": false, 00:15:48.051 "compare_and_write": false, 00:15:48.051 "abort": true, 00:15:48.051 "nvme_admin": false, 00:15:48.051 "nvme_io": false 00:15:48.051 }, 00:15:48.051 "memory_domains": [ 00:15:48.051 { 00:15:48.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.051 "dma_device_type": 2 00:15:48.051 } 00:15:48.051 ], 00:15:48.051 "driver_specific": {} 00:15:48.051 } 00:15:48.051 ] 00:15:48.051 00:58:22 -- 
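The descriptor dumped by bdev_get_bdevs shows the raid module holding an exclusive_write claim on BaseBdev1. If you want to assert the claim directly, the relevant fields can be pulled out with jq; a hypothetical check along the lines of the suite's conventions:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 \
  | jq -r '.[0] | "\(.name) claimed=\(.claimed) claim_type=\(.claim_type // "none")"'
# Expected here: "BaseBdev1 claimed=true claim_type=exclusive_write"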
common/autotest_common.sh@905 -- # return 0 00:15:48.051 00:58:22 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:48.051 00:58:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:48.051 00:58:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:48.051 00:58:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:48.051 00:58:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:48.051 00:58:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:48.051 00:58:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:48.051 00:58:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:48.051 00:58:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:48.051 00:58:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:48.051 00:58:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.051 00:58:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.308 00:58:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:48.308 "name": "Existed_Raid", 00:15:48.308 "uuid": "bf44fd51-e7c8-49f4-bbf0-2b9f2b856be2", 00:15:48.308 "strip_size_kb": 0, 00:15:48.308 "state": "configuring", 00:15:48.308 "raid_level": "raid1", 00:15:48.308 "superblock": true, 00:15:48.308 "num_base_bdevs": 2, 00:15:48.308 "num_base_bdevs_discovered": 1, 00:15:48.308 "num_base_bdevs_operational": 2, 00:15:48.308 "base_bdevs_list": [ 00:15:48.308 { 00:15:48.308 "name": "BaseBdev1", 00:15:48.308 "uuid": "1a3d4dbb-b35d-43aa-a082-427a58feec97", 00:15:48.308 "is_configured": true, 00:15:48.308 "data_offset": 2048, 00:15:48.308 "data_size": 63488 00:15:48.308 }, 00:15:48.308 { 00:15:48.308 "name": "BaseBdev2", 00:15:48.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.308 "is_configured": false, 00:15:48.308 "data_offset": 0, 00:15:48.308 "data_size": 0 00:15:48.308 } 00:15:48.308 ] 00:15:48.308 }' 00:15:48.308 00:58:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:48.308 00:58:22 -- common/autotest_common.sh@10 -- # set +x 00:15:48.873 00:58:23 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:48.873 [2024-11-18 00:58:23.244573] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:48.873 [2024-11-18 00:58:23.244666] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:48.873 00:58:23 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:48.873 00:58:23 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:49.131 00:58:23 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:49.389 BaseBdev1 00:15:49.389 00:58:23 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:49.389 00:58:23 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:49.389 00:58:23 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:49.389 00:58:23 -- common/autotest_common.sh@899 -- # local i 00:15:49.389 00:58:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:49.389 00:58:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:49.390 00:58:23 -- common/autotest_common.sh@902 -- # 
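Note the effect of -s in the listing above: the configured base bdev reports data_offset 2048 and data_size 63488, where the non-superblock run showed 0 and 65536. The first 2048 blocks (1 MiB at 512-byte blocks) of each member appear to be reserved for the raid superblock, leaving 65536 - 2048 = 63488 data blocks. A quick way to read those fields back while the array exists, following the same jq style:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "Existed_Raid") | .base_bdevs_list[] | "\(.name // "null") offset=\(.data_offset) size=\(.data_size)"'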
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:49.647 00:58:23 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.905 [ 00:15:49.905 { 00:15:49.905 "name": "BaseBdev1", 00:15:49.905 "aliases": [ 00:15:49.905 "f059011a-d809-4204-b1a4-f6c2b4505d20" 00:15:49.905 ], 00:15:49.905 "product_name": "Malloc disk", 00:15:49.905 "block_size": 512, 00:15:49.905 "num_blocks": 65536, 00:15:49.905 "uuid": "f059011a-d809-4204-b1a4-f6c2b4505d20", 00:15:49.905 "assigned_rate_limits": { 00:15:49.905 "rw_ios_per_sec": 0, 00:15:49.905 "rw_mbytes_per_sec": 0, 00:15:49.905 "r_mbytes_per_sec": 0, 00:15:49.905 "w_mbytes_per_sec": 0 00:15:49.905 }, 00:15:49.905 "claimed": false, 00:15:49.905 "zoned": false, 00:15:49.905 "supported_io_types": { 00:15:49.905 "read": true, 00:15:49.905 "write": true, 00:15:49.905 "unmap": true, 00:15:49.905 "write_zeroes": true, 00:15:49.905 "flush": true, 00:15:49.905 "reset": true, 00:15:49.905 "compare": false, 00:15:49.905 "compare_and_write": false, 00:15:49.905 "abort": true, 00:15:49.905 "nvme_admin": false, 00:15:49.905 "nvme_io": false 00:15:49.905 }, 00:15:49.905 "memory_domains": [ 00:15:49.905 { 00:15:49.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.905 "dma_device_type": 2 00:15:49.905 } 00:15:49.905 ], 00:15:49.905 "driver_specific": {} 00:15:49.905 } 00:15:49.905 ] 00:15:49.905 00:58:24 -- common/autotest_common.sh@905 -- # return 0 00:15:49.905 00:58:24 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:50.164 [2024-11-18 00:58:24.425804] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.164 [2024-11-18 00:58:24.428281] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.164 [2024-11-18 00:58:24.428363] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.164 00:58:24 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:50.164 00:58:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:50.164 00:58:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:50.164 00:58:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:50.164 00:58:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:50.164 00:58:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:50.164 00:58:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:50.164 00:58:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:50.164 00:58:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:50.164 00:58:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:50.164 00:58:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:50.164 00:58:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:50.164 00:58:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.164 00:58:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.422 00:58:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:50.422 "name": "Existed_Raid", 00:15:50.422 "uuid": "c68c79ef-6a90-41a5-8c4a-8eed3735a2bf", 00:15:50.422 "strip_size_kb": 0, 00:15:50.422 "state": "configuring", 
00:15:50.422 "raid_level": "raid1", 00:15:50.422 "superblock": true, 00:15:50.422 "num_base_bdevs": 2, 00:15:50.422 "num_base_bdevs_discovered": 1, 00:15:50.422 "num_base_bdevs_operational": 2, 00:15:50.422 "base_bdevs_list": [ 00:15:50.422 { 00:15:50.422 "name": "BaseBdev1", 00:15:50.423 "uuid": "f059011a-d809-4204-b1a4-f6c2b4505d20", 00:15:50.423 "is_configured": true, 00:15:50.423 "data_offset": 2048, 00:15:50.423 "data_size": 63488 00:15:50.423 }, 00:15:50.423 { 00:15:50.423 "name": "BaseBdev2", 00:15:50.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.423 "is_configured": false, 00:15:50.423 "data_offset": 0, 00:15:50.423 "data_size": 0 00:15:50.423 } 00:15:50.423 ] 00:15:50.423 }' 00:15:50.423 00:58:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:50.423 00:58:24 -- common/autotest_common.sh@10 -- # set +x 00:15:50.989 00:58:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:51.248 [2024-11-18 00:58:25.400159] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.248 [2024-11-18 00:58:25.400489] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:15:51.248 [2024-11-18 00:58:25.400509] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:51.248 [2024-11-18 00:58:25.400712] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:15:51.248 [2024-11-18 00:58:25.401318] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:15:51.248 [2024-11-18 00:58:25.401346] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:15:51.248 [2024-11-18 00:58:25.401582] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.248 BaseBdev2 00:15:51.248 00:58:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:51.248 00:58:25 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:51.248 00:58:25 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:51.248 00:58:25 -- common/autotest_common.sh@899 -- # local i 00:15:51.248 00:58:25 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:51.248 00:58:25 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:51.248 00:58:25 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:51.512 00:58:25 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:51.512 [ 00:15:51.512 { 00:15:51.512 "name": "BaseBdev2", 00:15:51.512 "aliases": [ 00:15:51.512 "b4edaa5e-004d-41ae-9dff-9777b3f44af0" 00:15:51.512 ], 00:15:51.512 "product_name": "Malloc disk", 00:15:51.512 "block_size": 512, 00:15:51.512 "num_blocks": 65536, 00:15:51.512 "uuid": "b4edaa5e-004d-41ae-9dff-9777b3f44af0", 00:15:51.512 "assigned_rate_limits": { 00:15:51.512 "rw_ios_per_sec": 0, 00:15:51.512 "rw_mbytes_per_sec": 0, 00:15:51.512 "r_mbytes_per_sec": 0, 00:15:51.512 "w_mbytes_per_sec": 0 00:15:51.512 }, 00:15:51.512 "claimed": true, 00:15:51.512 "claim_type": "exclusive_write", 00:15:51.512 "zoned": false, 00:15:51.512 "supported_io_types": { 00:15:51.512 "read": true, 00:15:51.512 "write": true, 00:15:51.512 "unmap": true, 00:15:51.512 "write_zeroes": true, 00:15:51.512 "flush": true, 00:15:51.512 "reset": true, 
00:15:51.512 "compare": false, 00:15:51.512 "compare_and_write": false, 00:15:51.512 "abort": true, 00:15:51.512 "nvme_admin": false, 00:15:51.512 "nvme_io": false 00:15:51.512 }, 00:15:51.512 "memory_domains": [ 00:15:51.512 { 00:15:51.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.512 "dma_device_type": 2 00:15:51.512 } 00:15:51.512 ], 00:15:51.512 "driver_specific": {} 00:15:51.512 } 00:15:51.512 ] 00:15:51.512 00:58:25 -- common/autotest_common.sh@905 -- # return 0 00:15:51.512 00:58:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:51.512 00:58:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:51.512 00:58:25 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:51.512 00:58:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:51.512 00:58:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:51.512 00:58:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:51.512 00:58:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:51.512 00:58:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:51.512 00:58:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.512 00:58:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.512 00:58:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.512 00:58:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.512 00:58:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.512 00:58:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.779 00:58:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.779 "name": "Existed_Raid", 00:15:51.779 "uuid": "c68c79ef-6a90-41a5-8c4a-8eed3735a2bf", 00:15:51.779 "strip_size_kb": 0, 00:15:51.779 "state": "online", 00:15:51.779 "raid_level": "raid1", 00:15:51.779 "superblock": true, 00:15:51.779 "num_base_bdevs": 2, 00:15:51.779 "num_base_bdevs_discovered": 2, 00:15:51.779 "num_base_bdevs_operational": 2, 00:15:51.779 "base_bdevs_list": [ 00:15:51.779 { 00:15:51.779 "name": "BaseBdev1", 00:15:51.779 "uuid": "f059011a-d809-4204-b1a4-f6c2b4505d20", 00:15:51.779 "is_configured": true, 00:15:51.779 "data_offset": 2048, 00:15:51.779 "data_size": 63488 00:15:51.779 }, 00:15:51.779 { 00:15:51.779 "name": "BaseBdev2", 00:15:51.780 "uuid": "b4edaa5e-004d-41ae-9dff-9777b3f44af0", 00:15:51.780 "is_configured": true, 00:15:51.780 "data_offset": 2048, 00:15:51.780 "data_size": 63488 00:15:51.780 } 00:15:51.780 ] 00:15:51.780 }' 00:15:51.780 00:58:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.780 00:58:26 -- common/autotest_common.sh@10 -- # set +x 00:15:52.347 00:58:26 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:52.606 [2024-11-18 00:58:26.844558] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:52.606 
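has_redundancy is what decides whether pulling a base bdev should leave the array online: for raid1 it hits the case branch that returns 0, so expected_state stays "online". Its body isn't shown in this trace; a hypothetical condensed form consistent with the behaviour seen here:

# Hypothetical simplification of a has_redundancy-style helper (not the suite's actual code).
has_redundancy() {
    case "$1" in
        raid1) return 0 ;;   # mirrored: survives losing a member
        *)     return 1 ;;   # assumed non-redundant levels: removal takes the array offline
    esac
}
if has_redundancy raid1; then expected_state=online; else expected_state=offline; fi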
00:58:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.606 00:58:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.870 00:58:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.870 "name": "Existed_Raid", 00:15:52.870 "uuid": "c68c79ef-6a90-41a5-8c4a-8eed3735a2bf", 00:15:52.870 "strip_size_kb": 0, 00:15:52.870 "state": "online", 00:15:52.870 "raid_level": "raid1", 00:15:52.870 "superblock": true, 00:15:52.870 "num_base_bdevs": 2, 00:15:52.870 "num_base_bdevs_discovered": 1, 00:15:52.870 "num_base_bdevs_operational": 1, 00:15:52.870 "base_bdevs_list": [ 00:15:52.870 { 00:15:52.870 "name": null, 00:15:52.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.870 "is_configured": false, 00:15:52.870 "data_offset": 2048, 00:15:52.870 "data_size": 63488 00:15:52.870 }, 00:15:52.870 { 00:15:52.870 "name": "BaseBdev2", 00:15:52.870 "uuid": "b4edaa5e-004d-41ae-9dff-9777b3f44af0", 00:15:52.870 "is_configured": true, 00:15:52.870 "data_offset": 2048, 00:15:52.870 "data_size": 63488 00:15:52.870 } 00:15:52.870 ] 00:15:52.870 }' 00:15:52.870 00:58:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.870 00:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:53.437 00:58:27 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:53.437 00:58:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:53.437 00:58:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.437 00:58:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:53.695 00:58:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:53.695 00:58:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.695 00:58:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:53.954 [2024-11-18 00:58:28.288731] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:53.954 [2024-11-18 00:58:28.288776] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.954 [2024-11-18 00:58:28.288862] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.954 [2024-11-18 00:58:28.309907] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.954 [2024-11-18 00:58:28.309939] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:15:53.954 00:58:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:53.954 00:58:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:53.954 00:58:28 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:53.954 00:58:28 -- bdev/bdev_raid.sh@281 -- # 
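Removing BaseBdev2 as well takes away the last remaining member, so the state machine goes from online to offline and the raid bdev is cleaned up. The suite then checks that no raid bdev is left by reusing the '.[0]["name"] | select(.)' filter, which prints nothing for an empty list:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
raid_bdev=$($RPC -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
[ -z "$raid_bdev" ] && echo 'no raid bdevs remain'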
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.213 00:58:28 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:54.213 00:58:28 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:54.213 00:58:28 -- bdev/bdev_raid.sh@287 -- # killprocess 125096 00:15:54.213 00:58:28 -- common/autotest_common.sh@936 -- # '[' -z 125096 ']' 00:15:54.213 00:58:28 -- common/autotest_common.sh@940 -- # kill -0 125096 00:15:54.213 00:58:28 -- common/autotest_common.sh@941 -- # uname 00:15:54.213 00:58:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:54.213 00:58:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125096 00:15:54.213 killing process with pid 125096 00:15:54.213 00:58:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:54.213 00:58:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:54.213 00:58:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125096' 00:15:54.213 00:58:28 -- common/autotest_common.sh@955 -- # kill 125096 00:15:54.213 [2024-11-18 00:58:28.561033] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.213 00:58:28 -- common/autotest_common.sh@960 -- # wait 125096 00:15:54.213 [2024-11-18 00:58:28.561120] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.779 00:58:28 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:54.779 00:15:54.779 real 0m9.873s 00:15:54.779 user 0m17.117s 00:15:54.779 sys 0m1.815s 00:15:54.779 00:58:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:54.779 00:58:28 -- common/autotest_common.sh@10 -- # set +x 00:15:54.779 ************************************ 00:15:54.779 END TEST raid_state_function_test_sb 00:15:54.779 ************************************ 00:15:54.779 00:58:29 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:54.779 00:58:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:54.779 00:58:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:54.779 00:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:54.779 ************************************ 00:15:54.779 START TEST raid_superblock_test 00:15:54.779 ************************************ 00:15:54.779 00:58:29 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 2 00:15:54.779 00:58:29 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:15:54.779 00:58:29 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:54.779 00:58:29 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:54.779 00:58:29 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:54.779 00:58:29 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:54.779 00:58:29 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:54.780 00:58:29 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:54.780 00:58:29 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:54.780 00:58:29 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:54.780 00:58:29 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:54.780 00:58:29 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:54.780 00:58:29 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:54.780 00:58:29 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:54.780 00:58:29 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:15:54.780 00:58:29 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:15:54.780 00:58:29 -- bdev/bdev_raid.sh@357 -- # raid_pid=125407 00:15:54.780 
00:58:29 -- bdev/bdev_raid.sh@358 -- # waitforlisten 125407 /var/tmp/spdk-raid.sock 00:15:54.780 00:58:29 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:54.780 00:58:29 -- common/autotest_common.sh@829 -- # '[' -z 125407 ']' 00:15:54.780 00:58:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:54.780 00:58:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.780 00:58:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:54.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:54.780 00:58:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.780 00:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:54.780 [2024-11-18 00:58:29.116022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:54.780 [2024-11-18 00:58:29.116333] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125407 ] 00:15:55.037 [2024-11-18 00:58:29.266137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.037 [2024-11-18 00:58:29.346922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.037 [2024-11-18 00:58:29.425483] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.603 00:58:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.603 00:58:29 -- common/autotest_common.sh@862 -- # return 0 00:15:55.603 00:58:29 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:55.603 00:58:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:55.603 00:58:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:55.603 00:58:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:55.603 00:58:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:55.603 00:58:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:55.603 00:58:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:55.603 00:58:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:55.603 00:58:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:55.862 malloc1 00:15:56.120 00:58:30 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:56.120 [2024-11-18 00:58:30.490426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:56.120 [2024-11-18 00:58:30.490561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.121 [2024-11-18 00:58:30.490609] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:15:56.121 [2024-11-18 00:58:30.490679] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.121 [2024-11-18 00:58:30.493725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.121 [2024-11-18 00:58:30.493798] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:56.121 pt1 
00:15:56.121 00:58:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:56.121 00:58:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:56.121 00:58:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:56.121 00:58:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:56.121 00:58:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:56.121 00:58:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.121 00:58:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.121 00:58:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.121 00:58:30 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:56.379 malloc2 00:15:56.379 00:58:30 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:56.638 [2024-11-18 00:58:30.890250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:56.638 [2024-11-18 00:58:30.890361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.638 [2024-11-18 00:58:30.890402] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:56.638 [2024-11-18 00:58:30.890454] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.638 [2024-11-18 00:58:30.893165] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.638 [2024-11-18 00:58:30.893225] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:56.638 pt2 00:15:56.638 00:58:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:56.638 00:58:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:56.638 00:58:30 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:56.896 [2024-11-18 00:58:31.138378] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:56.896 [2024-11-18 00:58:31.140864] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:56.896 [2024-11-18 00:58:31.141103] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:15:56.896 [2024-11-18 00:58:31.141115] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:56.896 [2024-11-18 00:58:31.141266] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:56.896 [2024-11-18 00:58:31.141712] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:15:56.896 [2024-11-18 00:58:31.141731] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:15:56.896 [2024-11-18 00:58:31.141897] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.896 00:58:31 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:56.896 00:58:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:56.896 00:58:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:56.896 00:58:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:56.896 00:58:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:56.896 00:58:31 -- 
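raid_superblock_test builds each member as a malloc bdev wrapped in a passthru bdev with a pinned UUID, so the members keep stable identities across delete/re-create cycles, and then assembles the array with the superblock flag. The stacking for one member, commands and UUIDs as in the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
$RPC -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
# malloc2/pt2 are built the same way, then the array is assembled on the passthru layer:
$RPC -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s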
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:56.896 00:58:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:56.896 00:58:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:56.896 00:58:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:56.896 00:58:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:56.896 00:58:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.896 00:58:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.154 00:58:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:57.155 "name": "raid_bdev1", 00:15:57.155 "uuid": "ba8c6649-3139-4d95-8843-09db2d5627d4", 00:15:57.155 "strip_size_kb": 0, 00:15:57.155 "state": "online", 00:15:57.155 "raid_level": "raid1", 00:15:57.155 "superblock": true, 00:15:57.155 "num_base_bdevs": 2, 00:15:57.155 "num_base_bdevs_discovered": 2, 00:15:57.155 "num_base_bdevs_operational": 2, 00:15:57.155 "base_bdevs_list": [ 00:15:57.155 { 00:15:57.155 "name": "pt1", 00:15:57.155 "uuid": "1fdb5508-1d0f-5134-97a4-0b780b3ee0a4", 00:15:57.155 "is_configured": true, 00:15:57.155 "data_offset": 2048, 00:15:57.155 "data_size": 63488 00:15:57.155 }, 00:15:57.155 { 00:15:57.155 "name": "pt2", 00:15:57.155 "uuid": "3592556e-3be8-58b5-a358-42441448cd47", 00:15:57.155 "is_configured": true, 00:15:57.155 "data_offset": 2048, 00:15:57.155 "data_size": 63488 00:15:57.155 } 00:15:57.155 ] 00:15:57.155 }' 00:15:57.155 00:58:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:57.155 00:58:31 -- common/autotest_common.sh@10 -- # set +x 00:15:57.722 00:58:31 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:57.722 00:58:31 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:57.980 [2024-11-18 00:58:32.174701] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.980 00:58:32 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=ba8c6649-3139-4d95-8843-09db2d5627d4 00:15:57.980 00:58:32 -- bdev/bdev_raid.sh@380 -- # '[' -z ba8c6649-3139-4d95-8843-09db2d5627d4 ']' 00:15:57.980 00:58:32 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:58.239 [2024-11-18 00:58:32.442524] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.239 [2024-11-18 00:58:32.442564] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.239 [2024-11-18 00:58:32.442690] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.239 [2024-11-18 00:58:32.442779] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.239 [2024-11-18 00:58:32.442790] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:15:58.239 00:58:32 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.239 00:58:32 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:58.498 00:58:32 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:58.498 00:58:32 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:58.498 00:58:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.498 00:58:32 -- bdev/bdev_raid.sh@393 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:58.756 00:58:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.756 00:58:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:59.016 00:58:33 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:59.016 00:58:33 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:59.016 00:58:33 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:59.016 00:58:33 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:59.016 00:58:33 -- common/autotest_common.sh@650 -- # local es=0 00:15:59.016 00:58:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:59.016 00:58:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:59.016 00:58:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.016 00:58:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:59.016 00:58:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.016 00:58:33 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:59.016 00:58:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.016 00:58:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:59.016 00:58:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:59.016 00:58:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:59.276 [2024-11-18 00:58:33.562864] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:59.276 [2024-11-18 00:58:33.565319] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:59.276 [2024-11-18 00:58:33.565396] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:59.276 [2024-11-18 00:58:33.565486] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:59.276 [2024-11-18 00:58:33.565527] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.276 [2024-11-18 00:58:33.565538] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:15:59.276 request: 00:15:59.276 { 00:15:59.276 "name": "raid_bdev1", 00:15:59.276 "raid_level": "raid1", 00:15:59.276 "base_bdevs": [ 00:15:59.276 "malloc1", 00:15:59.276 "malloc2" 00:15:59.276 ], 00:15:59.276 "superblock": false, 00:15:59.276 "method": "bdev_raid_create", 00:15:59.276 "req_id": 1 00:15:59.276 } 00:15:59.276 Got JSON-RPC error response 00:15:59.276 response: 00:15:59.276 { 00:15:59.276 "code": -17, 00:15:59.276 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:59.276 } 00:15:59.276 00:58:33 -- common/autotest_common.sh@653 -- # es=1 00:15:59.276 
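The NOT wrapper runs a command and succeeds only if that command fails; here it asserts that creating a raid directly on malloc1/malloc2 must be rejected, since those bdevs already carry raid_bdev1's superblock underneath the deleted passthru layer. A plain-bash equivalent of the same negative test:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Expect failure: malloc1/malloc2 already hold a raid superblock for raid_bdev1.
if $RPC -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
    echo 'unexpectedly succeeded' >&2; exit 1
fi
echo 'rejected as expected'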
00:58:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:59.276 00:58:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:59.276 00:58:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:59.276 00:58:33 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.276 00:58:33 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:59.535 00:58:33 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:59.535 00:58:33 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:59.535 00:58:33 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:59.794 [2024-11-18 00:58:33.954865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:59.794 [2024-11-18 00:58:33.955024] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.794 [2024-11-18 00:58:33.955061] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:59.794 [2024-11-18 00:58:33.955090] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.794 [2024-11-18 00:58:33.957834] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.795 [2024-11-18 00:58:33.957894] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:59.795 [2024-11-18 00:58:33.957983] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:59.795 [2024-11-18 00:58:33.958061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:59.795 pt1 00:15:59.795 00:58:33 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:59.795 00:58:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:59.795 00:58:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:59.795 00:58:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:59.795 00:58:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:59.795 00:58:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:59.795 00:58:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:59.795 00:58:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.795 00:58:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.795 00:58:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.795 00:58:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.795 00:58:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.054 00:58:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:00.054 "name": "raid_bdev1", 00:16:00.054 "uuid": "ba8c6649-3139-4d95-8843-09db2d5627d4", 00:16:00.054 "strip_size_kb": 0, 00:16:00.054 "state": "configuring", 00:16:00.054 "raid_level": "raid1", 00:16:00.054 "superblock": true, 00:16:00.054 "num_base_bdevs": 2, 00:16:00.054 "num_base_bdevs_discovered": 1, 00:16:00.054 "num_base_bdevs_operational": 2, 00:16:00.054 "base_bdevs_list": [ 00:16:00.054 { 00:16:00.054 "name": "pt1", 00:16:00.054 "uuid": "1fdb5508-1d0f-5134-97a4-0b780b3ee0a4", 00:16:00.054 "is_configured": true, 00:16:00.054 "data_offset": 2048, 00:16:00.054 "data_size": 63488 00:16:00.054 }, 00:16:00.054 { 00:16:00.054 "name": null, 00:16:00.054 "uuid": 
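The JSON-RPC error (-17, "File exists") comes from raid_bdev_configure_base_bdev_check_sb_cb spotting the existing superblock on each malloc bdev. The path the test takes next is not to call bdev_raid_create again but to re-create the passthru bdevs and let examine re-assemble raid_bdev1 from the superblocks. A sketch of that re-assembly, names from the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Recreate the first member; examine finds the raid superblock on pt1 and claims it automatically.
$RPC -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
# "configuring" until pt2 is recreated as well, after which the array comes back online.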
"3592556e-3be8-58b5-a358-42441448cd47", 00:16:00.054 "is_configured": false, 00:16:00.054 "data_offset": 2048, 00:16:00.054 "data_size": 63488 00:16:00.054 } 00:16:00.054 ] 00:16:00.054 }' 00:16:00.054 00:58:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:00.054 00:58:34 -- common/autotest_common.sh@10 -- # set +x 00:16:00.622 00:58:34 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:16:00.622 00:58:34 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:00.622 00:58:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:00.622 00:58:34 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:00.881 [2024-11-18 00:58:35.115113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:00.881 [2024-11-18 00:58:35.115254] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.881 [2024-11-18 00:58:35.115291] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:00.881 [2024-11-18 00:58:35.115327] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.881 [2024-11-18 00:58:35.115812] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.881 [2024-11-18 00:58:35.115855] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:00.881 [2024-11-18 00:58:35.115938] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:00.881 [2024-11-18 00:58:35.115966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:00.881 [2024-11-18 00:58:35.116089] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:16:00.881 [2024-11-18 00:58:35.116099] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:00.881 [2024-11-18 00:58:35.116180] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:00.881 [2024-11-18 00:58:35.116501] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:16:00.881 [2024-11-18 00:58:35.116516] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:16:00.881 [2024-11-18 00:58:35.116612] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.881 pt2 00:16:00.881 00:58:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:00.881 00:58:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:00.881 00:58:35 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:00.881 00:58:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:00.881 00:58:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:00.881 00:58:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:00.881 00:58:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:00.881 00:58:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:00.881 00:58:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:00.881 00:58:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:00.881 00:58:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:00.881 00:58:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:00.881 00:58:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:00.881 00:58:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.141 00:58:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.141 "name": "raid_bdev1", 00:16:01.141 "uuid": "ba8c6649-3139-4d95-8843-09db2d5627d4", 00:16:01.141 "strip_size_kb": 0, 00:16:01.141 "state": "online", 00:16:01.141 "raid_level": "raid1", 00:16:01.141 "superblock": true, 00:16:01.141 "num_base_bdevs": 2, 00:16:01.141 "num_base_bdevs_discovered": 2, 00:16:01.141 "num_base_bdevs_operational": 2, 00:16:01.141 "base_bdevs_list": [ 00:16:01.141 { 00:16:01.141 "name": "pt1", 00:16:01.141 "uuid": "1fdb5508-1d0f-5134-97a4-0b780b3ee0a4", 00:16:01.141 "is_configured": true, 00:16:01.141 "data_offset": 2048, 00:16:01.141 "data_size": 63488 00:16:01.141 }, 00:16:01.141 { 00:16:01.141 "name": "pt2", 00:16:01.141 "uuid": "3592556e-3be8-58b5-a358-42441448cd47", 00:16:01.141 "is_configured": true, 00:16:01.141 "data_offset": 2048, 00:16:01.141 "data_size": 63488 00:16:01.141 } 00:16:01.141 ] 00:16:01.141 }' 00:16:01.141 00:58:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.141 00:58:35 -- common/autotest_common.sh@10 -- # set +x 00:16:01.710 00:58:36 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:01.710 00:58:36 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:01.968 [2024-11-18 00:58:36.195512] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.968 00:58:36 -- bdev/bdev_raid.sh@430 -- # '[' ba8c6649-3139-4d95-8843-09db2d5627d4 '!=' ba8c6649-3139-4d95-8843-09db2d5627d4 ']' 00:16:01.968 00:58:36 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:16:01.968 00:58:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:01.968 00:58:36 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:01.968 00:58:36 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:02.227 [2024-11-18 00:58:36.487429] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:02.227 00:58:36 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:02.227 00:58:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:02.227 00:58:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:02.227 00:58:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:02.227 00:58:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:02.227 00:58:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:02.227 00:58:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:02.227 00:58:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:02.227 00:58:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:02.227 00:58:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:02.227 00:58:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.227 00:58:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.486 00:58:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.486 "name": "raid_bdev1", 00:16:02.486 "uuid": "ba8c6649-3139-4d95-8843-09db2d5627d4", 00:16:02.486 "strip_size_kb": 0, 00:16:02.486 "state": "online", 00:16:02.486 "raid_level": "raid1", 00:16:02.486 "superblock": true, 00:16:02.486 "num_base_bdevs": 2, 00:16:02.486 "num_base_bdevs_discovered": 1, 00:16:02.486 
"num_base_bdevs_operational": 1, 00:16:02.486 "base_bdevs_list": [ 00:16:02.486 { 00:16:02.486 "name": null, 00:16:02.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.486 "is_configured": false, 00:16:02.486 "data_offset": 2048, 00:16:02.486 "data_size": 63488 00:16:02.486 }, 00:16:02.486 { 00:16:02.486 "name": "pt2", 00:16:02.486 "uuid": "3592556e-3be8-58b5-a358-42441448cd47", 00:16:02.486 "is_configured": true, 00:16:02.486 "data_offset": 2048, 00:16:02.486 "data_size": 63488 00:16:02.486 } 00:16:02.486 ] 00:16:02.486 }' 00:16:02.486 00:58:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.486 00:58:36 -- common/autotest_common.sh@10 -- # set +x 00:16:03.053 00:58:37 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:03.311 [2024-11-18 00:58:37.551578] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:03.311 [2024-11-18 00:58:37.551622] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:03.311 [2024-11-18 00:58:37.551700] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.311 [2024-11-18 00:58:37.551757] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.311 [2024-11-18 00:58:37.551767] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:16:03.311 00:58:37 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:16:03.311 00:58:37 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.569 00:58:37 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:16:03.569 00:58:37 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:16:03.569 00:58:37 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:16:03.569 00:58:37 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:03.569 00:58:37 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:03.827 00:58:38 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:03.828 00:58:38 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:03.828 00:58:38 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:16:03.828 00:58:38 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:03.828 00:58:38 -- bdev/bdev_raid.sh@462 -- # i=1 00:16:03.828 00:58:38 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:04.085 [2024-11-18 00:58:38.283700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:04.085 [2024-11-18 00:58:38.283838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.085 [2024-11-18 00:58:38.283872] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:04.085 [2024-11-18 00:58:38.283901] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.085 [2024-11-18 00:58:38.286727] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.085 [2024-11-18 00:58:38.286807] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:04.085 [2024-11-18 00:58:38.286898] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:04.085 [2024-11-18 
00:58:38.286934] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.085 [2024-11-18 00:58:38.287030] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:04.085 [2024-11-18 00:58:38.287039] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:04.085 [2024-11-18 00:58:38.287110] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:04.085 [2024-11-18 00:58:38.287430] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:04.085 [2024-11-18 00:58:38.287448] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:16:04.085 [2024-11-18 00:58:38.287544] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.085 pt2 00:16:04.085 00:58:38 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.085 00:58:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:04.085 00:58:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:04.085 00:58:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:04.085 00:58:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:04.085 00:58:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:04.085 00:58:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.085 00:58:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.085 00:58:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.085 00:58:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.085 00:58:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.085 00:58:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.343 00:58:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.343 "name": "raid_bdev1", 00:16:04.343 "uuid": "ba8c6649-3139-4d95-8843-09db2d5627d4", 00:16:04.343 "strip_size_kb": 0, 00:16:04.343 "state": "online", 00:16:04.343 "raid_level": "raid1", 00:16:04.343 "superblock": true, 00:16:04.343 "num_base_bdevs": 2, 00:16:04.343 "num_base_bdevs_discovered": 1, 00:16:04.343 "num_base_bdevs_operational": 1, 00:16:04.343 "base_bdevs_list": [ 00:16:04.343 { 00:16:04.343 "name": null, 00:16:04.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.343 "is_configured": false, 00:16:04.343 "data_offset": 2048, 00:16:04.343 "data_size": 63488 00:16:04.343 }, 00:16:04.343 { 00:16:04.343 "name": "pt2", 00:16:04.343 "uuid": "3592556e-3be8-58b5-a358-42441448cd47", 00:16:04.343 "is_configured": true, 00:16:04.343 "data_offset": 2048, 00:16:04.343 "data_size": 63488 00:16:04.343 } 00:16:04.343 ] 00:16:04.343 }' 00:16:04.343 00:58:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.343 00:58:38 -- common/autotest_common.sh@10 -- # set +x 00:16:04.911 00:58:39 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:16:04.911 00:58:39 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:04.911 00:58:39 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:16:05.170 [2024-11-18 00:58:39.380089] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.170 00:58:39 -- bdev/bdev_raid.sh@506 -- # '[' ba8c6649-3139-4d95-8843-09db2d5627d4 '!=' ba8c6649-3139-4d95-8843-09db2d5627d4 ']' 00:16:05.170 
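The UUID comparison at bdev_raid.sh@506 above amounts to the following sketch. The bdev_get_bdevs call and jq filter are taken from the trace; the expected value is simply whatever UUID raid_bdev1 reported earlier in this run, shown here only as an example.

    expected=ba8c6649-3139-4d95-8843-09db2d5627d4   # UUID reported for raid_bdev1 earlier in this run
    actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    # The raid bdev must keep its UUID across the base-bdev remove/re-add cycle above.
    [ "$actual" = "$expected" ] || exit 1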
00:58:39 -- bdev/bdev_raid.sh@511 -- # killprocess 125407 00:16:05.170 00:58:39 -- common/autotest_common.sh@936 -- # '[' -z 125407 ']' 00:16:05.170 00:58:39 -- common/autotest_common.sh@940 -- # kill -0 125407 00:16:05.170 00:58:39 -- common/autotest_common.sh@941 -- # uname 00:16:05.170 00:58:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:05.170 00:58:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125407 00:16:05.170 00:58:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:05.170 killing process with pid 125407 00:16:05.170 00:58:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:05.170 00:58:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125407' 00:16:05.170 00:58:39 -- common/autotest_common.sh@955 -- # kill 125407 00:16:05.170 00:58:39 -- common/autotest_common.sh@960 -- # wait 125407 00:16:05.170 [2024-11-18 00:58:39.438774] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.170 [2024-11-18 00:58:39.438864] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.170 [2024-11-18 00:58:39.438923] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.170 [2024-11-18 00:58:39.438933] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:16:05.170 [2024-11-18 00:58:39.480936] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:05.739 00:16:05.739 real 0m10.843s 00:16:05.739 user 0m19.153s 00:16:05.739 sys 0m2.054s 00:16:05.739 00:58:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:05.739 00:58:39 -- common/autotest_common.sh@10 -- # set +x 00:16:05.739 ************************************ 00:16:05.739 END TEST raid_superblock_test 00:16:05.739 ************************************ 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:05.739 00:58:39 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:05.739 00:58:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:05.739 00:58:39 -- common/autotest_common.sh@10 -- # set +x 00:16:05.739 ************************************ 00:16:05.739 START TEST raid_state_function_test 00:16:05.739 ************************************ 00:16:05.739 00:58:39 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 false 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@226 -- # raid_pid=125758 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125758' 00:16:05.739 Process raid pid: 125758 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125758 /var/tmp/spdk-raid.sock 00:16:05.739 00:58:39 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:05.739 00:58:39 -- common/autotest_common.sh@829 -- # '[' -z 125758 ']' 00:16:05.739 00:58:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:05.739 00:58:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:05.739 00:58:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:05.739 00:58:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.739 00:58:39 -- common/autotest_common.sh@10 -- # set +x 00:16:05.739 [2024-11-18 00:58:40.037850] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
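In outline, the daemon startup logged here looks like the sketch below. The bdev_svc path and flags are as in the trace; the polling loop is only a simplified stand-in for the waitforlisten helper used by the test, and assumes the standard rpc_get_methods RPC is available as a readiness probe.

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Poll the RPC socket until the app answers, then the bdev_raid_* RPCs below can be issued.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done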
00:16:05.739 [2024-11-18 00:58:40.038113] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.998 [2024-11-18 00:58:40.187074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.998 [2024-11-18 00:58:40.275033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.998 [2024-11-18 00:58:40.353975] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.934 00:58:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.934 00:58:41 -- common/autotest_common.sh@862 -- # return 0 00:16:06.934 00:58:41 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:06.934 [2024-11-18 00:58:41.231466] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:06.934 [2024-11-18 00:58:41.231582] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:06.934 [2024-11-18 00:58:41.231595] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:06.934 [2024-11-18 00:58:41.231615] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:06.934 [2024-11-18 00:58:41.231622] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:06.934 [2024-11-18 00:58:41.231674] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:06.934 00:58:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:06.934 00:58:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.934 00:58:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:06.934 00:58:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:06.934 00:58:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:06.934 00:58:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:06.934 00:58:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.934 00:58:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.934 00:58:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.934 00:58:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.934 00:58:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.934 00:58:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.198 00:58:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:07.198 "name": "Existed_Raid", 00:16:07.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.198 "strip_size_kb": 64, 00:16:07.198 "state": "configuring", 00:16:07.198 "raid_level": "raid0", 00:16:07.198 "superblock": false, 00:16:07.198 "num_base_bdevs": 3, 00:16:07.198 "num_base_bdevs_discovered": 0, 00:16:07.198 "num_base_bdevs_operational": 3, 00:16:07.199 "base_bdevs_list": [ 00:16:07.199 { 00:16:07.199 "name": "BaseBdev1", 00:16:07.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.199 "is_configured": false, 00:16:07.199 "data_offset": 0, 00:16:07.199 "data_size": 0 00:16:07.199 }, 00:16:07.199 { 00:16:07.199 "name": "BaseBdev2", 00:16:07.199 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:07.199 "is_configured": false, 00:16:07.199 "data_offset": 0, 00:16:07.199 "data_size": 0 00:16:07.199 }, 00:16:07.199 { 00:16:07.199 "name": "BaseBdev3", 00:16:07.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.199 "is_configured": false, 00:16:07.199 "data_offset": 0, 00:16:07.199 "data_size": 0 00:16:07.199 } 00:16:07.199 ] 00:16:07.199 }' 00:16:07.199 00:58:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:07.199 00:58:41 -- common/autotest_common.sh@10 -- # set +x 00:16:07.794 00:58:42 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:08.068 [2024-11-18 00:58:42.263557] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:08.068 [2024-11-18 00:58:42.263628] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:08.068 00:58:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:08.326 [2024-11-18 00:58:42.535605] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.326 [2024-11-18 00:58:42.535697] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.326 [2024-11-18 00:58:42.535708] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:08.326 [2024-11-18 00:58:42.535734] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:08.326 [2024-11-18 00:58:42.535740] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:08.326 [2024-11-18 00:58:42.535767] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:08.326 00:58:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:08.585 [2024-11-18 00:58:42.751731] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.585 BaseBdev1 00:16:08.585 00:58:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:08.585 00:58:42 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:08.585 00:58:42 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:08.585 00:58:42 -- common/autotest_common.sh@899 -- # local i 00:16:08.585 00:58:42 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:08.585 00:58:42 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:08.585 00:58:42 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.843 00:58:43 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:08.843 [ 00:16:08.843 { 00:16:08.843 "name": "BaseBdev1", 00:16:08.843 "aliases": [ 00:16:08.843 "db822615-f087-435f-8f37-7f9d3cdc97a4" 00:16:08.843 ], 00:16:08.843 "product_name": "Malloc disk", 00:16:08.843 "block_size": 512, 00:16:08.843 "num_blocks": 65536, 00:16:08.843 "uuid": "db822615-f087-435f-8f37-7f9d3cdc97a4", 00:16:08.843 "assigned_rate_limits": { 00:16:08.843 "rw_ios_per_sec": 0, 00:16:08.843 "rw_mbytes_per_sec": 0, 00:16:08.843 "r_mbytes_per_sec": 0, 00:16:08.843 "w_mbytes_per_sec": 0 
00:16:08.843 }, 00:16:08.843 "claimed": true, 00:16:08.843 "claim_type": "exclusive_write", 00:16:08.843 "zoned": false, 00:16:08.843 "supported_io_types": { 00:16:08.843 "read": true, 00:16:08.843 "write": true, 00:16:08.843 "unmap": true, 00:16:08.843 "write_zeroes": true, 00:16:08.843 "flush": true, 00:16:08.843 "reset": true, 00:16:08.843 "compare": false, 00:16:08.843 "compare_and_write": false, 00:16:08.843 "abort": true, 00:16:08.843 "nvme_admin": false, 00:16:08.843 "nvme_io": false 00:16:08.843 }, 00:16:08.843 "memory_domains": [ 00:16:08.843 { 00:16:08.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.843 "dma_device_type": 2 00:16:08.843 } 00:16:08.843 ], 00:16:08.843 "driver_specific": {} 00:16:08.843 } 00:16:08.843 ] 00:16:08.843 00:58:43 -- common/autotest_common.sh@905 -- # return 0 00:16:08.843 00:58:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:08.843 00:58:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:08.843 00:58:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:08.843 00:58:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:08.843 00:58:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:08.843 00:58:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:08.843 00:58:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:08.843 00:58:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:08.843 00:58:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:08.843 00:58:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:08.844 00:58:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.844 00:58:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.103 00:58:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.103 "name": "Existed_Raid", 00:16:09.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.103 "strip_size_kb": 64, 00:16:09.103 "state": "configuring", 00:16:09.103 "raid_level": "raid0", 00:16:09.103 "superblock": false, 00:16:09.103 "num_base_bdevs": 3, 00:16:09.103 "num_base_bdevs_discovered": 1, 00:16:09.103 "num_base_bdevs_operational": 3, 00:16:09.103 "base_bdevs_list": [ 00:16:09.103 { 00:16:09.103 "name": "BaseBdev1", 00:16:09.103 "uuid": "db822615-f087-435f-8f37-7f9d3cdc97a4", 00:16:09.103 "is_configured": true, 00:16:09.103 "data_offset": 0, 00:16:09.103 "data_size": 65536 00:16:09.103 }, 00:16:09.103 { 00:16:09.103 "name": "BaseBdev2", 00:16:09.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.103 "is_configured": false, 00:16:09.103 "data_offset": 0, 00:16:09.103 "data_size": 0 00:16:09.103 }, 00:16:09.103 { 00:16:09.103 "name": "BaseBdev3", 00:16:09.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.103 "is_configured": false, 00:16:09.103 "data_offset": 0, 00:16:09.103 "data_size": 0 00:16:09.103 } 00:16:09.103 ] 00:16:09.103 }' 00:16:09.103 00:58:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.103 00:58:43 -- common/autotest_common.sh@10 -- # set +x 00:16:09.671 00:58:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:09.930 [2024-11-18 00:58:44.240053] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:09.930 [2024-11-18 00:58:44.240150] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:16:09.930 00:58:44 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:09.930 00:58:44 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:10.189 [2024-11-18 00:58:44.492208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.189 [2024-11-18 00:58:44.494765] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:10.189 [2024-11-18 00:58:44.494853] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:10.189 [2024-11-18 00:58:44.494864] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:10.189 [2024-11-18 00:58:44.494891] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:10.189 00:58:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:10.189 00:58:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:10.189 00:58:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:10.189 00:58:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:10.189 00:58:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:10.189 00:58:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:10.189 00:58:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:10.189 00:58:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:10.189 00:58:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:10.189 00:58:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:10.189 00:58:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:10.189 00:58:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:10.189 00:58:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.189 00:58:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.448 00:58:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:10.448 "name": "Existed_Raid", 00:16:10.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.448 "strip_size_kb": 64, 00:16:10.448 "state": "configuring", 00:16:10.448 "raid_level": "raid0", 00:16:10.448 "superblock": false, 00:16:10.448 "num_base_bdevs": 3, 00:16:10.448 "num_base_bdevs_discovered": 1, 00:16:10.448 "num_base_bdevs_operational": 3, 00:16:10.448 "base_bdevs_list": [ 00:16:10.448 { 00:16:10.448 "name": "BaseBdev1", 00:16:10.448 "uuid": "db822615-f087-435f-8f37-7f9d3cdc97a4", 00:16:10.448 "is_configured": true, 00:16:10.448 "data_offset": 0, 00:16:10.448 "data_size": 65536 00:16:10.448 }, 00:16:10.448 { 00:16:10.448 "name": "BaseBdev2", 00:16:10.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.448 "is_configured": false, 00:16:10.448 "data_offset": 0, 00:16:10.448 "data_size": 0 00:16:10.448 }, 00:16:10.448 { 00:16:10.448 "name": "BaseBdev3", 00:16:10.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.448 "is_configured": false, 00:16:10.448 "data_offset": 0, 00:16:10.448 "data_size": 0 00:16:10.448 } 00:16:10.448 ] 00:16:10.448 }' 00:16:10.448 00:58:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:10.448 00:58:44 -- common/autotest_common.sh@10 -- # set +x 00:16:11.015 00:58:45 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:11.273 [2024-11-18 00:58:45.536463] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.273 BaseBdev2 00:16:11.273 00:58:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:11.273 00:58:45 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:11.273 00:58:45 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:11.273 00:58:45 -- common/autotest_common.sh@899 -- # local i 00:16:11.273 00:58:45 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:11.273 00:58:45 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:11.273 00:58:45 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:11.533 00:58:45 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:11.792 [ 00:16:11.792 { 00:16:11.792 "name": "BaseBdev2", 00:16:11.792 "aliases": [ 00:16:11.792 "08832c40-2542-4148-a4a7-a02f2db9be84" 00:16:11.792 ], 00:16:11.792 "product_name": "Malloc disk", 00:16:11.792 "block_size": 512, 00:16:11.792 "num_blocks": 65536, 00:16:11.792 "uuid": "08832c40-2542-4148-a4a7-a02f2db9be84", 00:16:11.792 "assigned_rate_limits": { 00:16:11.792 "rw_ios_per_sec": 0, 00:16:11.792 "rw_mbytes_per_sec": 0, 00:16:11.792 "r_mbytes_per_sec": 0, 00:16:11.792 "w_mbytes_per_sec": 0 00:16:11.792 }, 00:16:11.792 "claimed": true, 00:16:11.792 "claim_type": "exclusive_write", 00:16:11.792 "zoned": false, 00:16:11.792 "supported_io_types": { 00:16:11.792 "read": true, 00:16:11.792 "write": true, 00:16:11.792 "unmap": true, 00:16:11.792 "write_zeroes": true, 00:16:11.792 "flush": true, 00:16:11.792 "reset": true, 00:16:11.792 "compare": false, 00:16:11.792 "compare_and_write": false, 00:16:11.792 "abort": true, 00:16:11.792 "nvme_admin": false, 00:16:11.792 "nvme_io": false 00:16:11.792 }, 00:16:11.792 "memory_domains": [ 00:16:11.792 { 00:16:11.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.792 "dma_device_type": 2 00:16:11.792 } 00:16:11.792 ], 00:16:11.792 "driver_specific": {} 00:16:11.792 } 00:16:11.792 ] 00:16:11.792 00:58:46 -- common/autotest_common.sh@905 -- # return 0 00:16:11.792 00:58:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:11.792 00:58:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:11.792 00:58:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:11.792 00:58:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:11.792 00:58:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:11.792 00:58:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:11.792 00:58:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:11.792 00:58:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:11.792 00:58:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:11.792 00:58:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:11.792 00:58:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:11.792 00:58:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:11.792 00:58:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.792 00:58:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
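verify_raid_bdev_state, exercised repeatedly in this trace, boils down to a query of the following shape. The RPC and jq filter are copied from the trace; the two field checks are a sketch of what the helper asserts at this point (raid still assembling, two of three members claimed), not its full implementation.

    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # With BaseBdev1 and BaseBdev2 claimed but BaseBdev3 still missing, the raid stays in "configuring".
    [ "$(jq -r '.state' <<< "$info")" = "configuring" ] || exit 1
    [ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" = "2" ] || exit 1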
00:16:12.051 00:58:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:12.051 "name": "Existed_Raid", 00:16:12.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.051 "strip_size_kb": 64, 00:16:12.051 "state": "configuring", 00:16:12.051 "raid_level": "raid0", 00:16:12.051 "superblock": false, 00:16:12.051 "num_base_bdevs": 3, 00:16:12.051 "num_base_bdevs_discovered": 2, 00:16:12.051 "num_base_bdevs_operational": 3, 00:16:12.051 "base_bdevs_list": [ 00:16:12.051 { 00:16:12.051 "name": "BaseBdev1", 00:16:12.051 "uuid": "db822615-f087-435f-8f37-7f9d3cdc97a4", 00:16:12.051 "is_configured": true, 00:16:12.051 "data_offset": 0, 00:16:12.051 "data_size": 65536 00:16:12.051 }, 00:16:12.051 { 00:16:12.051 "name": "BaseBdev2", 00:16:12.051 "uuid": "08832c40-2542-4148-a4a7-a02f2db9be84", 00:16:12.051 "is_configured": true, 00:16:12.051 "data_offset": 0, 00:16:12.051 "data_size": 65536 00:16:12.051 }, 00:16:12.051 { 00:16:12.051 "name": "BaseBdev3", 00:16:12.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.051 "is_configured": false, 00:16:12.051 "data_offset": 0, 00:16:12.051 "data_size": 0 00:16:12.051 } 00:16:12.051 ] 00:16:12.051 }' 00:16:12.051 00:58:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:12.051 00:58:46 -- common/autotest_common.sh@10 -- # set +x 00:16:12.618 00:58:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:12.877 [2024-11-18 00:58:47.148245] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:12.877 [2024-11-18 00:58:47.148311] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:12.877 [2024-11-18 00:58:47.148320] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:12.877 [2024-11-18 00:58:47.148490] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:16:12.877 [2024-11-18 00:58:47.148904] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:12.877 [2024-11-18 00:58:47.148923] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:16:12.877 [2024-11-18 00:58:47.149189] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.877 BaseBdev3 00:16:12.877 00:58:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:12.877 00:58:47 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:12.877 00:58:47 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:12.877 00:58:47 -- common/autotest_common.sh@899 -- # local i 00:16:12.877 00:58:47 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:12.877 00:58:47 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:12.877 00:58:47 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:13.136 00:58:47 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:13.395 [ 00:16:13.395 { 00:16:13.395 "name": "BaseBdev3", 00:16:13.395 "aliases": [ 00:16:13.395 "d5cb986f-df3a-4522-a73a-ffae085addf9" 00:16:13.395 ], 00:16:13.395 "product_name": "Malloc disk", 00:16:13.395 "block_size": 512, 00:16:13.395 "num_blocks": 65536, 00:16:13.395 "uuid": "d5cb986f-df3a-4522-a73a-ffae085addf9", 00:16:13.395 "assigned_rate_limits": { 00:16:13.395 
"rw_ios_per_sec": 0, 00:16:13.395 "rw_mbytes_per_sec": 0, 00:16:13.395 "r_mbytes_per_sec": 0, 00:16:13.395 "w_mbytes_per_sec": 0 00:16:13.395 }, 00:16:13.395 "claimed": true, 00:16:13.395 "claim_type": "exclusive_write", 00:16:13.395 "zoned": false, 00:16:13.395 "supported_io_types": { 00:16:13.395 "read": true, 00:16:13.395 "write": true, 00:16:13.395 "unmap": true, 00:16:13.395 "write_zeroes": true, 00:16:13.395 "flush": true, 00:16:13.395 "reset": true, 00:16:13.395 "compare": false, 00:16:13.395 "compare_and_write": false, 00:16:13.395 "abort": true, 00:16:13.395 "nvme_admin": false, 00:16:13.395 "nvme_io": false 00:16:13.395 }, 00:16:13.395 "memory_domains": [ 00:16:13.395 { 00:16:13.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.395 "dma_device_type": 2 00:16:13.395 } 00:16:13.395 ], 00:16:13.395 "driver_specific": {} 00:16:13.395 } 00:16:13.395 ] 00:16:13.395 00:58:47 -- common/autotest_common.sh@905 -- # return 0 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.395 "name": "Existed_Raid", 00:16:13.395 "uuid": "926b615e-d398-4040-98dc-1aa9d124bc48", 00:16:13.395 "strip_size_kb": 64, 00:16:13.395 "state": "online", 00:16:13.395 "raid_level": "raid0", 00:16:13.395 "superblock": false, 00:16:13.395 "num_base_bdevs": 3, 00:16:13.395 "num_base_bdevs_discovered": 3, 00:16:13.395 "num_base_bdevs_operational": 3, 00:16:13.395 "base_bdevs_list": [ 00:16:13.395 { 00:16:13.395 "name": "BaseBdev1", 00:16:13.395 "uuid": "db822615-f087-435f-8f37-7f9d3cdc97a4", 00:16:13.395 "is_configured": true, 00:16:13.395 "data_offset": 0, 00:16:13.395 "data_size": 65536 00:16:13.395 }, 00:16:13.395 { 00:16:13.395 "name": "BaseBdev2", 00:16:13.395 "uuid": "08832c40-2542-4148-a4a7-a02f2db9be84", 00:16:13.395 "is_configured": true, 00:16:13.395 "data_offset": 0, 00:16:13.395 "data_size": 65536 00:16:13.395 }, 00:16:13.395 { 00:16:13.395 "name": "BaseBdev3", 00:16:13.395 "uuid": "d5cb986f-df3a-4522-a73a-ffae085addf9", 00:16:13.395 "is_configured": true, 00:16:13.395 "data_offset": 0, 00:16:13.395 "data_size": 65536 00:16:13.395 } 00:16:13.395 ] 00:16:13.395 }' 00:16:13.395 00:58:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.395 00:58:47 -- common/autotest_common.sh@10 -- # set +x 00:16:13.964 00:58:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:14.222 [2024-11-18 00:58:48.576703] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:14.222 [2024-11-18 00:58:48.576758] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.222 [2024-11-18 00:58:48.576837] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.481 00:58:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.740 00:58:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:14.740 "name": "Existed_Raid", 00:16:14.740 "uuid": "926b615e-d398-4040-98dc-1aa9d124bc48", 00:16:14.740 "strip_size_kb": 64, 00:16:14.740 "state": "offline", 00:16:14.740 "raid_level": "raid0", 00:16:14.740 "superblock": false, 00:16:14.740 "num_base_bdevs": 3, 00:16:14.740 "num_base_bdevs_discovered": 2, 00:16:14.740 "num_base_bdevs_operational": 2, 00:16:14.740 "base_bdevs_list": [ 00:16:14.740 { 00:16:14.740 "name": null, 00:16:14.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.740 "is_configured": false, 00:16:14.740 "data_offset": 0, 00:16:14.740 "data_size": 65536 00:16:14.740 }, 00:16:14.740 { 00:16:14.740 "name": "BaseBdev2", 00:16:14.740 "uuid": "08832c40-2542-4148-a4a7-a02f2db9be84", 00:16:14.740 "is_configured": true, 00:16:14.740 "data_offset": 0, 00:16:14.740 "data_size": 65536 00:16:14.740 }, 00:16:14.740 { 00:16:14.740 "name": "BaseBdev3", 00:16:14.740 "uuid": "d5cb986f-df3a-4522-a73a-ffae085addf9", 00:16:14.740 "is_configured": true, 00:16:14.740 "data_offset": 0, 00:16:14.740 "data_size": 65536 00:16:14.740 } 00:16:14.740 ] 00:16:14.740 }' 00:16:14.740 00:58:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:14.740 00:58:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.321 00:58:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:15.321 00:58:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:15.321 00:58:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.321 00:58:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:15.580 00:58:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:15.580 00:58:49 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:16:15.580 00:58:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:15.838 [2024-11-18 00:58:50.014455] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:15.838 00:58:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:15.838 00:58:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:15.838 00:58:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.838 00:58:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:16.097 00:58:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:16.097 00:58:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:16.097 00:58:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:16.356 [2024-11-18 00:58:50.508325] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:16.356 [2024-11-18 00:58:50.508402] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:16:16.356 00:58:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:16.356 00:58:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:16.356 00:58:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.356 00:58:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:16.615 00:58:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:16.615 00:58:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:16.615 00:58:50 -- bdev/bdev_raid.sh@287 -- # killprocess 125758 00:16:16.615 00:58:50 -- common/autotest_common.sh@936 -- # '[' -z 125758 ']' 00:16:16.615 00:58:50 -- common/autotest_common.sh@940 -- # kill -0 125758 00:16:16.615 00:58:50 -- common/autotest_common.sh@941 -- # uname 00:16:16.615 00:58:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.615 00:58:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125758 00:16:16.615 00:58:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:16.615 00:58:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:16.615 00:58:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125758' 00:16:16.615 killing process with pid 125758 00:16:16.615 00:58:50 -- common/autotest_common.sh@955 -- # kill 125758 00:16:16.615 [2024-11-18 00:58:50.856068] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.615 [2024-11-18 00:58:50.856181] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.615 00:58:50 -- common/autotest_common.sh@960 -- # wait 125758 00:16:16.873 00:58:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:16.873 00:16:16.873 real 0m11.292s 00:16:16.873 user 0m19.737s 00:16:16.873 sys 0m2.170s 00:16:16.873 00:58:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:16.873 00:58:51 -- common/autotest_common.sh@10 -- # set +x 00:16:16.873 ************************************ 00:16:16.873 END TEST raid_state_function_test 00:16:16.873 ************************************ 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:16:17.132 00:58:51 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:17.132 00:58:51 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:16:17.132 00:58:51 -- common/autotest_common.sh@10 -- # set +x 00:16:17.132 ************************************ 00:16:17.132 START TEST raid_state_function_test_sb 00:16:17.132 ************************************ 00:16:17.132 00:58:51 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 true 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@226 -- # raid_pid=126129 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:17.132 Process raid pid: 126129 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126129' 00:16:17.132 00:58:51 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126129 /var/tmp/spdk-raid.sock 00:16:17.132 00:58:51 -- common/autotest_common.sh@829 -- # '[' -z 126129 ']' 00:16:17.132 00:58:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:17.132 00:58:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:17.132 00:58:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:17.132 00:58:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.132 00:58:51 -- common/autotest_common.sh@10 -- # set +x 00:16:17.132 [2024-11-18 00:58:51.398039] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
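The _sb variant starting here differs from the previous run only in the superblock flag: superblock_create_arg becomes -s, so the create call issued in the trace that follows has this shape (arguments copied from the trace; shown only to highlight the difference).

    # Same flow as the previous test, but '-s' asks for an on-disk raid superblock on each base bdev.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid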
00:16:17.132 [2024-11-18 00:58:51.399219] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.391 [2024-11-18 00:58:51.558456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.391 [2024-11-18 00:58:51.651699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.391 [2024-11-18 00:58:51.736660] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.958 00:58:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.958 00:58:52 -- common/autotest_common.sh@862 -- # return 0 00:16:17.958 00:58:52 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:18.216 [2024-11-18 00:58:52.575855] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.216 [2024-11-18 00:58:52.575956] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.216 [2024-11-18 00:58:52.575967] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.216 [2024-11-18 00:58:52.575987] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.216 [2024-11-18 00:58:52.575994] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.216 [2024-11-18 00:58:52.576039] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.216 00:58:52 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:18.216 00:58:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:18.216 00:58:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:18.216 00:58:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:18.216 00:58:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:18.216 00:58:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:18.216 00:58:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:18.216 00:58:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:18.216 00:58:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:18.216 00:58:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:18.216 00:58:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.216 00:58:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.476 00:58:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:18.476 "name": "Existed_Raid", 00:16:18.476 "uuid": "1795a18e-90c0-4724-b424-f52632e067af", 00:16:18.476 "strip_size_kb": 64, 00:16:18.476 "state": "configuring", 00:16:18.476 "raid_level": "raid0", 00:16:18.476 "superblock": true, 00:16:18.476 "num_base_bdevs": 3, 00:16:18.476 "num_base_bdevs_discovered": 0, 00:16:18.476 "num_base_bdevs_operational": 3, 00:16:18.476 "base_bdevs_list": [ 00:16:18.476 { 00:16:18.476 "name": "BaseBdev1", 00:16:18.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.476 "is_configured": false, 00:16:18.476 "data_offset": 0, 00:16:18.476 "data_size": 0 00:16:18.476 }, 00:16:18.476 { 00:16:18.476 "name": "BaseBdev2", 00:16:18.476 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:18.476 "is_configured": false, 00:16:18.476 "data_offset": 0, 00:16:18.476 "data_size": 0 00:16:18.476 }, 00:16:18.476 { 00:16:18.476 "name": "BaseBdev3", 00:16:18.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.476 "is_configured": false, 00:16:18.476 "data_offset": 0, 00:16:18.476 "data_size": 0 00:16:18.476 } 00:16:18.476 ] 00:16:18.476 }' 00:16:18.476 00:58:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:18.476 00:58:52 -- common/autotest_common.sh@10 -- # set +x 00:16:19.043 00:58:53 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:19.302 [2024-11-18 00:58:53.647881] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:19.302 [2024-11-18 00:58:53.647931] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:19.302 00:58:53 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:19.561 [2024-11-18 00:58:53.839976] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:19.561 [2024-11-18 00:58:53.840069] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:19.561 [2024-11-18 00:58:53.840080] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:19.561 [2024-11-18 00:58:53.840105] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:19.561 [2024-11-18 00:58:53.840112] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:19.561 [2024-11-18 00:58:53.840138] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:19.561 00:58:53 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:19.821 [2024-11-18 00:58:54.048145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.821 BaseBdev1 00:16:19.821 00:58:54 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:19.821 00:58:54 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:19.821 00:58:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:19.821 00:58:54 -- common/autotest_common.sh@899 -- # local i 00:16:19.821 00:58:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:19.821 00:58:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:19.821 00:58:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:20.080 00:58:54 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:20.080 [ 00:16:20.080 { 00:16:20.080 "name": "BaseBdev1", 00:16:20.080 "aliases": [ 00:16:20.080 "240179b9-f55b-4db2-ac8a-b69d08e86dd3" 00:16:20.080 ], 00:16:20.080 "product_name": "Malloc disk", 00:16:20.080 "block_size": 512, 00:16:20.080 "num_blocks": 65536, 00:16:20.080 "uuid": "240179b9-f55b-4db2-ac8a-b69d08e86dd3", 00:16:20.080 "assigned_rate_limits": { 00:16:20.080 "rw_ios_per_sec": 0, 00:16:20.080 "rw_mbytes_per_sec": 0, 00:16:20.080 "r_mbytes_per_sec": 0, 00:16:20.080 
"w_mbytes_per_sec": 0 00:16:20.080 }, 00:16:20.080 "claimed": true, 00:16:20.080 "claim_type": "exclusive_write", 00:16:20.080 "zoned": false, 00:16:20.080 "supported_io_types": { 00:16:20.080 "read": true, 00:16:20.080 "write": true, 00:16:20.080 "unmap": true, 00:16:20.080 "write_zeroes": true, 00:16:20.080 "flush": true, 00:16:20.080 "reset": true, 00:16:20.080 "compare": false, 00:16:20.080 "compare_and_write": false, 00:16:20.080 "abort": true, 00:16:20.080 "nvme_admin": false, 00:16:20.080 "nvme_io": false 00:16:20.080 }, 00:16:20.080 "memory_domains": [ 00:16:20.080 { 00:16:20.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.080 "dma_device_type": 2 00:16:20.080 } 00:16:20.080 ], 00:16:20.080 "driver_specific": {} 00:16:20.080 } 00:16:20.080 ] 00:16:20.080 00:58:54 -- common/autotest_common.sh@905 -- # return 0 00:16:20.080 00:58:54 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:20.080 00:58:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:20.080 00:58:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:20.080 00:58:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:20.080 00:58:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:20.080 00:58:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:20.080 00:58:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:20.080 00:58:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:20.080 00:58:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:20.080 00:58:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:20.080 00:58:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.080 00:58:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.338 00:58:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:20.338 "name": "Existed_Raid", 00:16:20.338 "uuid": "a4efa291-96f2-43a6-b9e6-ce7f71767875", 00:16:20.338 "strip_size_kb": 64, 00:16:20.338 "state": "configuring", 00:16:20.338 "raid_level": "raid0", 00:16:20.338 "superblock": true, 00:16:20.338 "num_base_bdevs": 3, 00:16:20.338 "num_base_bdevs_discovered": 1, 00:16:20.338 "num_base_bdevs_operational": 3, 00:16:20.338 "base_bdevs_list": [ 00:16:20.338 { 00:16:20.338 "name": "BaseBdev1", 00:16:20.338 "uuid": "240179b9-f55b-4db2-ac8a-b69d08e86dd3", 00:16:20.338 "is_configured": true, 00:16:20.338 "data_offset": 2048, 00:16:20.338 "data_size": 63488 00:16:20.338 }, 00:16:20.338 { 00:16:20.338 "name": "BaseBdev2", 00:16:20.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.338 "is_configured": false, 00:16:20.338 "data_offset": 0, 00:16:20.338 "data_size": 0 00:16:20.338 }, 00:16:20.338 { 00:16:20.338 "name": "BaseBdev3", 00:16:20.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.338 "is_configured": false, 00:16:20.338 "data_offset": 0, 00:16:20.338 "data_size": 0 00:16:20.338 } 00:16:20.338 ] 00:16:20.338 }' 00:16:20.338 00:58:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:20.338 00:58:54 -- common/autotest_common.sh@10 -- # set +x 00:16:20.905 00:58:55 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:21.163 [2024-11-18 00:58:55.356423] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:21.163 [2024-11-18 00:58:55.356524] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:16:21.163 00:58:55 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:21.163 00:58:55 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:21.422 00:58:55 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:21.422 BaseBdev1 00:16:21.422 00:58:55 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:21.422 00:58:55 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:21.422 00:58:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:21.422 00:58:55 -- common/autotest_common.sh@899 -- # local i 00:16:21.422 00:58:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:21.422 00:58:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:21.422 00:58:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:21.680 00:58:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:21.940 [ 00:16:21.940 { 00:16:21.940 "name": "BaseBdev1", 00:16:21.940 "aliases": [ 00:16:21.940 "5aaf9446-a382-4c1c-9cca-d87415df0b09" 00:16:21.940 ], 00:16:21.940 "product_name": "Malloc disk", 00:16:21.940 "block_size": 512, 00:16:21.940 "num_blocks": 65536, 00:16:21.940 "uuid": "5aaf9446-a382-4c1c-9cca-d87415df0b09", 00:16:21.940 "assigned_rate_limits": { 00:16:21.940 "rw_ios_per_sec": 0, 00:16:21.940 "rw_mbytes_per_sec": 0, 00:16:21.940 "r_mbytes_per_sec": 0, 00:16:21.940 "w_mbytes_per_sec": 0 00:16:21.940 }, 00:16:21.940 "claimed": false, 00:16:21.940 "zoned": false, 00:16:21.940 "supported_io_types": { 00:16:21.940 "read": true, 00:16:21.940 "write": true, 00:16:21.940 "unmap": true, 00:16:21.940 "write_zeroes": true, 00:16:21.940 "flush": true, 00:16:21.940 "reset": true, 00:16:21.940 "compare": false, 00:16:21.940 "compare_and_write": false, 00:16:21.940 "abort": true, 00:16:21.940 "nvme_admin": false, 00:16:21.940 "nvme_io": false 00:16:21.940 }, 00:16:21.940 "memory_domains": [ 00:16:21.940 { 00:16:21.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.940 "dma_device_type": 2 00:16:21.940 } 00:16:21.940 ], 00:16:21.940 "driver_specific": {} 00:16:21.940 } 00:16:21.940 ] 00:16:21.940 00:58:56 -- common/autotest_common.sh@905 -- # return 0 00:16:21.940 00:58:56 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:22.200 [2024-11-18 00:58:56.446044] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.200 [2024-11-18 00:58:56.448535] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.200 [2024-11-18 00:58:56.448624] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.200 [2024-11-18 00:58:56.448634] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:22.200 [2024-11-18 00:58:56.448662] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:22.201 00:58:56 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:22.201 00:58:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:22.201 
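The waitforbdev and verify_raid_bdev_state helpers seen in the trace reduce to a couple of RPC queries; a sketch of the checks they perform, reusing the names from this run (-t 2000 waits up to 2000 ms for the named bdev to appear):

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc_py bdev_wait_for_examine
  $rpc_py bdev_get_bdevs -b BaseBdev1 -t 2000
  # with only BaseBdev1 present the raid stays "configuring" with "num_base_bdevs_discovered": 1
  $rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'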
00:58:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:22.201 00:58:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:22.201 00:58:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:22.201 00:58:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:22.201 00:58:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:22.201 00:58:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:22.201 00:58:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:22.201 00:58:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:22.201 00:58:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:22.201 00:58:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:22.201 00:58:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.201 00:58:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.464 00:58:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:22.464 "name": "Existed_Raid", 00:16:22.464 "uuid": "0893b0a7-0019-4956-9bc5-8d444188152b", 00:16:22.464 "strip_size_kb": 64, 00:16:22.464 "state": "configuring", 00:16:22.464 "raid_level": "raid0", 00:16:22.464 "superblock": true, 00:16:22.464 "num_base_bdevs": 3, 00:16:22.464 "num_base_bdevs_discovered": 1, 00:16:22.464 "num_base_bdevs_operational": 3, 00:16:22.464 "base_bdevs_list": [ 00:16:22.464 { 00:16:22.464 "name": "BaseBdev1", 00:16:22.464 "uuid": "5aaf9446-a382-4c1c-9cca-d87415df0b09", 00:16:22.464 "is_configured": true, 00:16:22.464 "data_offset": 2048, 00:16:22.464 "data_size": 63488 00:16:22.464 }, 00:16:22.464 { 00:16:22.464 "name": "BaseBdev2", 00:16:22.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.464 "is_configured": false, 00:16:22.464 "data_offset": 0, 00:16:22.464 "data_size": 0 00:16:22.464 }, 00:16:22.464 { 00:16:22.464 "name": "BaseBdev3", 00:16:22.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.464 "is_configured": false, 00:16:22.464 "data_offset": 0, 00:16:22.464 "data_size": 0 00:16:22.464 } 00:16:22.464 ] 00:16:22.464 }' 00:16:22.464 00:58:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:22.464 00:58:56 -- common/autotest_common.sh@10 -- # set +x 00:16:23.031 00:58:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:23.291 [2024-11-18 00:58:57.532097] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:23.291 BaseBdev2 00:16:23.291 00:58:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:23.291 00:58:57 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:23.291 00:58:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:23.291 00:58:57 -- common/autotest_common.sh@899 -- # local i 00:16:23.291 00:58:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:23.291 00:58:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:23.291 00:58:57 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:23.551 00:58:57 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:23.551 [ 00:16:23.551 { 00:16:23.551 "name": "BaseBdev2", 00:16:23.551 "aliases": [ 00:16:23.551 
"fb0895ed-0708-49ec-9cd7-ba2acccb7e71" 00:16:23.551 ], 00:16:23.551 "product_name": "Malloc disk", 00:16:23.551 "block_size": 512, 00:16:23.551 "num_blocks": 65536, 00:16:23.551 "uuid": "fb0895ed-0708-49ec-9cd7-ba2acccb7e71", 00:16:23.551 "assigned_rate_limits": { 00:16:23.551 "rw_ios_per_sec": 0, 00:16:23.551 "rw_mbytes_per_sec": 0, 00:16:23.551 "r_mbytes_per_sec": 0, 00:16:23.551 "w_mbytes_per_sec": 0 00:16:23.551 }, 00:16:23.551 "claimed": true, 00:16:23.551 "claim_type": "exclusive_write", 00:16:23.551 "zoned": false, 00:16:23.551 "supported_io_types": { 00:16:23.551 "read": true, 00:16:23.551 "write": true, 00:16:23.551 "unmap": true, 00:16:23.551 "write_zeroes": true, 00:16:23.551 "flush": true, 00:16:23.551 "reset": true, 00:16:23.551 "compare": false, 00:16:23.551 "compare_and_write": false, 00:16:23.551 "abort": true, 00:16:23.551 "nvme_admin": false, 00:16:23.551 "nvme_io": false 00:16:23.551 }, 00:16:23.551 "memory_domains": [ 00:16:23.551 { 00:16:23.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.551 "dma_device_type": 2 00:16:23.551 } 00:16:23.551 ], 00:16:23.551 "driver_specific": {} 00:16:23.552 } 00:16:23.552 ] 00:16:23.552 00:58:57 -- common/autotest_common.sh@905 -- # return 0 00:16:23.552 00:58:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:23.552 00:58:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:23.552 00:58:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:23.552 00:58:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:23.552 00:58:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:23.552 00:58:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:23.552 00:58:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:23.552 00:58:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:23.552 00:58:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:23.552 00:58:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:23.552 00:58:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:23.552 00:58:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:23.552 00:58:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.552 00:58:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.810 00:58:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.810 "name": "Existed_Raid", 00:16:23.810 "uuid": "0893b0a7-0019-4956-9bc5-8d444188152b", 00:16:23.810 "strip_size_kb": 64, 00:16:23.810 "state": "configuring", 00:16:23.810 "raid_level": "raid0", 00:16:23.810 "superblock": true, 00:16:23.810 "num_base_bdevs": 3, 00:16:23.810 "num_base_bdevs_discovered": 2, 00:16:23.810 "num_base_bdevs_operational": 3, 00:16:23.810 "base_bdevs_list": [ 00:16:23.810 { 00:16:23.810 "name": "BaseBdev1", 00:16:23.810 "uuid": "5aaf9446-a382-4c1c-9cca-d87415df0b09", 00:16:23.810 "is_configured": true, 00:16:23.810 "data_offset": 2048, 00:16:23.810 "data_size": 63488 00:16:23.810 }, 00:16:23.810 { 00:16:23.810 "name": "BaseBdev2", 00:16:23.810 "uuid": "fb0895ed-0708-49ec-9cd7-ba2acccb7e71", 00:16:23.810 "is_configured": true, 00:16:23.810 "data_offset": 2048, 00:16:23.810 "data_size": 63488 00:16:23.810 }, 00:16:23.810 { 00:16:23.810 "name": "BaseBdev3", 00:16:23.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.810 "is_configured": false, 00:16:23.810 "data_offset": 0, 00:16:23.810 "data_size": 0 00:16:23.810 
} 00:16:23.810 ] 00:16:23.810 }' 00:16:23.810 00:58:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.810 00:58:58 -- common/autotest_common.sh@10 -- # set +x 00:16:24.377 00:58:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:24.635 [2024-11-18 00:58:58.841952] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:24.635 [2024-11-18 00:58:58.842216] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:16:24.635 [2024-11-18 00:58:58.842229] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:24.635 [2024-11-18 00:58:58.842372] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:24.635 [2024-11-18 00:58:58.842790] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:16:24.635 [2024-11-18 00:58:58.842801] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:16:24.635 [2024-11-18 00:58:58.842941] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.635 BaseBdev3 00:16:24.635 00:58:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:24.635 00:58:58 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:24.635 00:58:58 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:24.635 00:58:58 -- common/autotest_common.sh@899 -- # local i 00:16:24.635 00:58:58 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:24.635 00:58:58 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:24.635 00:58:58 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:24.894 00:58:59 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:25.153 [ 00:16:25.153 { 00:16:25.153 "name": "BaseBdev3", 00:16:25.153 "aliases": [ 00:16:25.153 "999db594-51f5-48fe-ac1f-79ed7c9a81ce" 00:16:25.153 ], 00:16:25.153 "product_name": "Malloc disk", 00:16:25.153 "block_size": 512, 00:16:25.153 "num_blocks": 65536, 00:16:25.153 "uuid": "999db594-51f5-48fe-ac1f-79ed7c9a81ce", 00:16:25.153 "assigned_rate_limits": { 00:16:25.153 "rw_ios_per_sec": 0, 00:16:25.153 "rw_mbytes_per_sec": 0, 00:16:25.153 "r_mbytes_per_sec": 0, 00:16:25.153 "w_mbytes_per_sec": 0 00:16:25.153 }, 00:16:25.153 "claimed": true, 00:16:25.153 "claim_type": "exclusive_write", 00:16:25.153 "zoned": false, 00:16:25.153 "supported_io_types": { 00:16:25.153 "read": true, 00:16:25.153 "write": true, 00:16:25.153 "unmap": true, 00:16:25.153 "write_zeroes": true, 00:16:25.153 "flush": true, 00:16:25.153 "reset": true, 00:16:25.153 "compare": false, 00:16:25.153 "compare_and_write": false, 00:16:25.153 "abort": true, 00:16:25.153 "nvme_admin": false, 00:16:25.153 "nvme_io": false 00:16:25.153 }, 00:16:25.153 "memory_domains": [ 00:16:25.153 { 00:16:25.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.153 "dma_device_type": 2 00:16:25.153 } 00:16:25.153 ], 00:16:25.153 "driver_specific": {} 00:16:25.153 } 00:16:25.153 ] 00:16:25.153 00:58:59 -- common/autotest_common.sh@905 -- # return 0 00:16:25.153 00:58:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:25.153 00:58:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:25.153 00:58:59 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:25.153 00:58:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:25.153 00:58:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:25.153 00:58:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:25.153 00:58:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:25.153 00:58:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:25.153 00:58:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:25.153 00:58:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:25.153 00:58:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:25.153 00:58:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:25.153 00:58:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.153 00:58:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.411 00:58:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:25.411 "name": "Existed_Raid", 00:16:25.411 "uuid": "0893b0a7-0019-4956-9bc5-8d444188152b", 00:16:25.411 "strip_size_kb": 64, 00:16:25.411 "state": "online", 00:16:25.411 "raid_level": "raid0", 00:16:25.411 "superblock": true, 00:16:25.411 "num_base_bdevs": 3, 00:16:25.411 "num_base_bdevs_discovered": 3, 00:16:25.411 "num_base_bdevs_operational": 3, 00:16:25.411 "base_bdevs_list": [ 00:16:25.411 { 00:16:25.411 "name": "BaseBdev1", 00:16:25.411 "uuid": "5aaf9446-a382-4c1c-9cca-d87415df0b09", 00:16:25.411 "is_configured": true, 00:16:25.411 "data_offset": 2048, 00:16:25.411 "data_size": 63488 00:16:25.411 }, 00:16:25.411 { 00:16:25.411 "name": "BaseBdev2", 00:16:25.411 "uuid": "fb0895ed-0708-49ec-9cd7-ba2acccb7e71", 00:16:25.411 "is_configured": true, 00:16:25.411 "data_offset": 2048, 00:16:25.411 "data_size": 63488 00:16:25.411 }, 00:16:25.411 { 00:16:25.411 "name": "BaseBdev3", 00:16:25.411 "uuid": "999db594-51f5-48fe-ac1f-79ed7c9a81ce", 00:16:25.411 "is_configured": true, 00:16:25.411 "data_offset": 2048, 00:16:25.411 "data_size": 63488 00:16:25.411 } 00:16:25.411 ] 00:16:25.411 }' 00:16:25.411 00:58:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:25.411 00:58:59 -- common/autotest_common.sh@10 -- # set +x 00:16:25.977 00:59:00 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:25.977 [2024-11-18 00:59:00.370409] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:25.977 [2024-11-18 00:59:00.370461] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.977 [2024-11-18 00:59:00.370530] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.235 00:59:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.494 00:59:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:26.494 "name": "Existed_Raid", 00:16:26.494 "uuid": "0893b0a7-0019-4956-9bc5-8d444188152b", 00:16:26.494 "strip_size_kb": 64, 00:16:26.494 "state": "offline", 00:16:26.494 "raid_level": "raid0", 00:16:26.494 "superblock": true, 00:16:26.494 "num_base_bdevs": 3, 00:16:26.494 "num_base_bdevs_discovered": 2, 00:16:26.494 "num_base_bdevs_operational": 2, 00:16:26.494 "base_bdevs_list": [ 00:16:26.494 { 00:16:26.494 "name": null, 00:16:26.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.494 "is_configured": false, 00:16:26.494 "data_offset": 2048, 00:16:26.494 "data_size": 63488 00:16:26.494 }, 00:16:26.494 { 00:16:26.494 "name": "BaseBdev2", 00:16:26.494 "uuid": "fb0895ed-0708-49ec-9cd7-ba2acccb7e71", 00:16:26.494 "is_configured": true, 00:16:26.494 "data_offset": 2048, 00:16:26.494 "data_size": 63488 00:16:26.494 }, 00:16:26.494 { 00:16:26.494 "name": "BaseBdev3", 00:16:26.494 "uuid": "999db594-51f5-48fe-ac1f-79ed7c9a81ce", 00:16:26.494 "is_configured": true, 00:16:26.494 "data_offset": 2048, 00:16:26.494 "data_size": 63488 00:16:26.494 } 00:16:26.494 ] 00:16:26.494 }' 00:16:26.494 00:59:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:26.494 00:59:00 -- common/autotest_common.sh@10 -- # set +x 00:16:27.061 00:59:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:27.061 00:59:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:27.061 00:59:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.061 00:59:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:27.061 00:59:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:27.061 00:59:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.061 00:59:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:27.320 [2024-11-18 00:59:01.594350] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:27.320 00:59:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:27.320 00:59:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:27.320 00:59:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.320 00:59:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:27.579 00:59:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:27.579 00:59:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.579 00:59:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:27.837 [2024-11-18 00:59:02.050918] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:27.837 [2024-11-18 
00:59:02.051027] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:16:27.837 00:59:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:27.837 00:59:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:27.837 00:59:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.837 00:59:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:28.096 00:59:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:28.096 00:59:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:28.096 00:59:02 -- bdev/bdev_raid.sh@287 -- # killprocess 126129 00:16:28.096 00:59:02 -- common/autotest_common.sh@936 -- # '[' -z 126129 ']' 00:16:28.096 00:59:02 -- common/autotest_common.sh@940 -- # kill -0 126129 00:16:28.096 00:59:02 -- common/autotest_common.sh@941 -- # uname 00:16:28.096 00:59:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:28.096 00:59:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126129 00:16:28.096 00:59:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:28.096 00:59:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:28.096 00:59:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126129' 00:16:28.096 killing process with pid 126129 00:16:28.096 00:59:02 -- common/autotest_common.sh@955 -- # kill 126129 00:16:28.096 [2024-11-18 00:59:02.395448] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.096 00:59:02 -- common/autotest_common.sh@960 -- # wait 126129 00:16:28.096 [2024-11-18 00:59:02.395567] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:28.663 ************************************ 00:16:28.663 END TEST raid_state_function_test_sb 00:16:28.663 ************************************ 00:16:28.663 00:16:28.663 real 0m11.475s 00:16:28.663 user 0m19.912s 00:16:28.663 sys 0m2.281s 00:16:28.663 00:59:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:28.663 00:59:02 -- common/autotest_common.sh@10 -- # set +x 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:16:28.663 00:59:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:28.663 00:59:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:28.663 00:59:02 -- common/autotest_common.sh@10 -- # set +x 00:16:28.663 ************************************ 00:16:28.663 START TEST raid_superblock_test 00:16:28.663 ************************************ 00:16:28.663 00:59:02 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 3 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:28.663 00:59:02 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:28.663 00:59:02 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:16:28.664 00:59:02 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:28.664 00:59:02 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:28.664 00:59:02 -- bdev/bdev_raid.sh@357 -- # raid_pid=126504 00:16:28.664 00:59:02 -- bdev/bdev_raid.sh@358 -- # waitforlisten 126504 /var/tmp/spdk-raid.sock 00:16:28.664 00:59:02 -- common/autotest_common.sh@829 -- # '[' -z 126504 ']' 00:16:28.664 00:59:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:28.664 00:59:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.664 00:59:02 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:28.664 00:59:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:28.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:28.664 00:59:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.664 00:59:02 -- common/autotest_common.sh@10 -- # set +x 00:16:28.664 [2024-11-18 00:59:02.937516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:28.664 [2024-11-18 00:59:02.937799] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126504 ] 00:16:28.922 [2024-11-18 00:59:03.098394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.922 [2024-11-18 00:59:03.188536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.922 [2024-11-18 00:59:03.272905] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.858 00:59:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.858 00:59:03 -- common/autotest_common.sh@862 -- # return 0 00:16:29.858 00:59:03 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:29.858 00:59:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:29.858 00:59:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:29.858 00:59:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:29.858 00:59:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:29.858 00:59:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:29.858 00:59:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:29.858 00:59:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:29.858 00:59:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:29.858 malloc1 00:16:29.858 00:59:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:30.115 [2024-11-18 00:59:04.435335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:30.115 [2024-11-18 00:59:04.435489] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.115 
[2024-11-18 00:59:04.435534] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:16:30.115 [2024-11-18 00:59:04.435596] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.115 [2024-11-18 00:59:04.438638] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.115 [2024-11-18 00:59:04.438712] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:30.115 pt1 00:16:30.115 00:59:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:30.115 00:59:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:30.115 00:59:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:30.115 00:59:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:30.115 00:59:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:30.115 00:59:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.115 00:59:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.115 00:59:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.115 00:59:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:30.422 malloc2 00:16:30.422 00:59:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:30.680 [2024-11-18 00:59:04.843264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:30.680 [2024-11-18 00:59:04.843374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.680 [2024-11-18 00:59:04.843417] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:30.680 [2024-11-18 00:59:04.843466] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.680 [2024-11-18 00:59:04.846203] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.680 [2024-11-18 00:59:04.846266] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:30.680 pt2 00:16:30.680 00:59:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:30.680 00:59:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:30.680 00:59:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:30.680 00:59:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:30.680 00:59:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:30.680 00:59:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.680 00:59:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.680 00:59:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.680 00:59:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:30.680 malloc3 00:16:30.938 00:59:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:30.938 [2024-11-18 00:59:05.270289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:30.938 [2024-11-18 00:59:05.270408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.938 
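raid_superblock_test assembles the array on passthru bdevs (pt1..pt3) stacked on malloc bdevs, so the superblock written through pt1..pt3 lands on the underlying malloc devices; that is why, further down, an attempt to build a new raid directly from 'malloc1 malloc2 malloc3' is expected to fail with "File exists" (an existing raid superblock is found on them). A sketch of the stacking, with the socket and UUIDs used in this run:

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc_py bdev_malloc_create 32 512 -b malloc1
  $rpc_py bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # ...likewise for malloc2/pt2 and malloc3/pt3, then:
  $rpc_py bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s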
[2024-11-18 00:59:05.270455] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:30.938 [2024-11-18 00:59:05.270502] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.938 [2024-11-18 00:59:05.273316] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.938 [2024-11-18 00:59:05.273382] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:30.938 pt3 00:16:30.938 00:59:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:30.938 00:59:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:30.938 00:59:05 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:31.195 [2024-11-18 00:59:05.547004] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:31.195 [2024-11-18 00:59:05.549552] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:31.195 [2024-11-18 00:59:05.549632] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:31.195 [2024-11-18 00:59:05.549840] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:31.195 [2024-11-18 00:59:05.549851] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:31.195 [2024-11-18 00:59:05.550056] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:16:31.195 [2024-11-18 00:59:05.550545] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:31.195 [2024-11-18 00:59:05.550731] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:16:31.195 [2024-11-18 00:59:05.551086] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.195 00:59:05 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:31.195 00:59:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:31.195 00:59:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:31.195 00:59:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:31.195 00:59:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:31.195 00:59:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:31.195 00:59:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.195 00:59:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.195 00:59:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.195 00:59:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.195 00:59:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.195 00:59:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.452 00:59:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.452 "name": "raid_bdev1", 00:16:31.452 "uuid": "135960b3-f7a5-4fb4-91f7-a16516beb854", 00:16:31.452 "strip_size_kb": 64, 00:16:31.452 "state": "online", 00:16:31.452 "raid_level": "raid0", 00:16:31.452 "superblock": true, 00:16:31.453 "num_base_bdevs": 3, 00:16:31.453 "num_base_bdevs_discovered": 3, 00:16:31.453 "num_base_bdevs_operational": 3, 00:16:31.453 "base_bdevs_list": [ 00:16:31.453 { 00:16:31.453 "name": "pt1", 00:16:31.453 "uuid": 
"c718706f-fca1-5ed2-b716-5c9c39ee0224", 00:16:31.453 "is_configured": true, 00:16:31.453 "data_offset": 2048, 00:16:31.453 "data_size": 63488 00:16:31.453 }, 00:16:31.453 { 00:16:31.453 "name": "pt2", 00:16:31.453 "uuid": "7dc4aa78-32cd-539f-853f-f33f45b18727", 00:16:31.453 "is_configured": true, 00:16:31.453 "data_offset": 2048, 00:16:31.453 "data_size": 63488 00:16:31.453 }, 00:16:31.453 { 00:16:31.453 "name": "pt3", 00:16:31.453 "uuid": "d1081847-4e2e-581d-9b75-cc9112260feb", 00:16:31.453 "is_configured": true, 00:16:31.453 "data_offset": 2048, 00:16:31.453 "data_size": 63488 00:16:31.453 } 00:16:31.453 ] 00:16:31.453 }' 00:16:31.453 00:59:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.453 00:59:05 -- common/autotest_common.sh@10 -- # set +x 00:16:32.019 00:59:06 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:32.019 00:59:06 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:32.277 [2024-11-18 00:59:06.567466] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.277 00:59:06 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=135960b3-f7a5-4fb4-91f7-a16516beb854 00:16:32.277 00:59:06 -- bdev/bdev_raid.sh@380 -- # '[' -z 135960b3-f7a5-4fb4-91f7-a16516beb854 ']' 00:16:32.277 00:59:06 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:32.535 [2024-11-18 00:59:06.827286] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.535 [2024-11-18 00:59:06.827589] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.535 [2024-11-18 00:59:06.827834] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.535 [2024-11-18 00:59:06.828033] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.535 [2024-11-18 00:59:06.828124] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:16:32.535 00:59:06 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.535 00:59:06 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:32.793 00:59:07 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:32.793 00:59:07 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:32.793 00:59:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:32.793 00:59:07 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:33.052 00:59:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:33.052 00:59:07 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:33.310 00:59:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:33.310 00:59:07 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:33.569 00:59:07 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:33.569 00:59:07 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:33.569 00:59:07 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:33.569 00:59:07 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:33.569 00:59:07 -- common/autotest_common.sh@650 -- # local es=0 00:16:33.569 00:59:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:33.569 00:59:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.569 00:59:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.569 00:59:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.569 00:59:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.569 00:59:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.569 00:59:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.569 00:59:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.569 00:59:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:33.569 00:59:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:33.827 [2024-11-18 00:59:08.183499] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:33.827 [2024-11-18 00:59:08.186402] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:33.827 [2024-11-18 00:59:08.186650] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:33.827 [2024-11-18 00:59:08.186747] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:33.827 [2024-11-18 00:59:08.186957] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:33.828 [2024-11-18 00:59:08.187101] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:33.828 [2024-11-18 00:59:08.187201] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.828 [2024-11-18 00:59:08.187242] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:16:33.828 request: 00:16:33.828 { 00:16:33.828 "name": "raid_bdev1", 00:16:33.828 "raid_level": "raid0", 00:16:33.828 "base_bdevs": [ 00:16:33.828 "malloc1", 00:16:33.828 "malloc2", 00:16:33.828 "malloc3" 00:16:33.828 ], 00:16:33.828 "superblock": false, 00:16:33.828 "strip_size_kb": 64, 00:16:33.828 "method": "bdev_raid_create", 00:16:33.828 "req_id": 1 00:16:33.828 } 00:16:33.828 Got JSON-RPC error response 00:16:33.828 response: 00:16:33.828 { 00:16:33.828 "code": -17, 00:16:33.828 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:33.828 } 00:16:33.828 00:59:08 -- common/autotest_common.sh@653 -- # es=1 00:16:33.828 00:59:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:33.828 00:59:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:33.828 00:59:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:33.828 00:59:08 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.828 00:59:08 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:34.086 00:59:08 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:34.087 00:59:08 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:34.087 00:59:08 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:34.345 [2024-11-18 00:59:08.571652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:34.345 [2024-11-18 00:59:08.571836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.345 [2024-11-18 00:59:08.571924] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:34.345 [2024-11-18 00:59:08.572086] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.345 [2024-11-18 00:59:08.575015] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.345 [2024-11-18 00:59:08.575189] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:34.345 [2024-11-18 00:59:08.575447] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:34.345 [2024-11-18 00:59:08.575626] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:34.345 pt1 00:16:34.345 00:59:08 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:34.345 00:59:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:34.345 00:59:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:34.345 00:59:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:34.345 00:59:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:34.345 00:59:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:34.345 00:59:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.345 00:59:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.345 00:59:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.345 00:59:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.345 00:59:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.345 00:59:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.604 00:59:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.604 "name": "raid_bdev1", 00:16:34.604 "uuid": "135960b3-f7a5-4fb4-91f7-a16516beb854", 00:16:34.604 "strip_size_kb": 64, 00:16:34.604 "state": "configuring", 00:16:34.604 "raid_level": "raid0", 00:16:34.604 "superblock": true, 00:16:34.604 "num_base_bdevs": 3, 00:16:34.604 "num_base_bdevs_discovered": 1, 00:16:34.604 "num_base_bdevs_operational": 3, 00:16:34.604 "base_bdevs_list": [ 00:16:34.604 { 00:16:34.604 "name": "pt1", 00:16:34.604 "uuid": "c718706f-fca1-5ed2-b716-5c9c39ee0224", 00:16:34.604 "is_configured": true, 00:16:34.604 "data_offset": 2048, 00:16:34.604 "data_size": 63488 00:16:34.604 }, 00:16:34.604 { 00:16:34.604 "name": null, 00:16:34.604 "uuid": "7dc4aa78-32cd-539f-853f-f33f45b18727", 00:16:34.604 "is_configured": false, 00:16:34.604 "data_offset": 2048, 00:16:34.604 "data_size": 63488 00:16:34.604 }, 00:16:34.604 { 00:16:34.604 "name": null, 00:16:34.604 "uuid": "d1081847-4e2e-581d-9b75-cc9112260feb", 00:16:34.604 "is_configured": false, 00:16:34.604 
"data_offset": 2048, 00:16:34.604 "data_size": 63488 00:16:34.604 } 00:16:34.604 ] 00:16:34.604 }' 00:16:34.604 00:59:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.604 00:59:08 -- common/autotest_common.sh@10 -- # set +x 00:16:35.171 00:59:09 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:35.171 00:59:09 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:35.171 [2024-11-18 00:59:09.560154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:35.171 [2024-11-18 00:59:09.560559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.171 [2024-11-18 00:59:09.560658] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:35.171 [2024-11-18 00:59:09.560800] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.171 [2024-11-18 00:59:09.561348] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.171 [2024-11-18 00:59:09.561507] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:35.171 [2024-11-18 00:59:09.561711] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:35.171 [2024-11-18 00:59:09.561816] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:35.171 pt2 00:16:35.430 00:59:09 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:35.689 [2024-11-18 00:59:09.844258] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:35.689 00:59:09 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:35.689 00:59:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:35.689 00:59:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:35.689 00:59:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:35.689 00:59:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:35.689 00:59:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:35.689 00:59:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:35.689 00:59:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:35.689 00:59:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:35.689 00:59:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:35.689 00:59:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.689 00:59:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.947 00:59:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.947 "name": "raid_bdev1", 00:16:35.947 "uuid": "135960b3-f7a5-4fb4-91f7-a16516beb854", 00:16:35.947 "strip_size_kb": 64, 00:16:35.947 "state": "configuring", 00:16:35.947 "raid_level": "raid0", 00:16:35.947 "superblock": true, 00:16:35.947 "num_base_bdevs": 3, 00:16:35.947 "num_base_bdevs_discovered": 1, 00:16:35.947 "num_base_bdevs_operational": 3, 00:16:35.947 "base_bdevs_list": [ 00:16:35.947 { 00:16:35.947 "name": "pt1", 00:16:35.947 "uuid": "c718706f-fca1-5ed2-b716-5c9c39ee0224", 00:16:35.947 "is_configured": true, 00:16:35.947 "data_offset": 2048, 00:16:35.947 "data_size": 63488 00:16:35.947 }, 00:16:35.947 { 00:16:35.947 "name": null, 00:16:35.947 "uuid": 
"7dc4aa78-32cd-539f-853f-f33f45b18727", 00:16:35.947 "is_configured": false, 00:16:35.947 "data_offset": 2048, 00:16:35.947 "data_size": 63488 00:16:35.947 }, 00:16:35.947 { 00:16:35.947 "name": null, 00:16:35.947 "uuid": "d1081847-4e2e-581d-9b75-cc9112260feb", 00:16:35.947 "is_configured": false, 00:16:35.947 "data_offset": 2048, 00:16:35.947 "data_size": 63488 00:16:35.947 } 00:16:35.947 ] 00:16:35.947 }' 00:16:35.947 00:59:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.947 00:59:10 -- common/autotest_common.sh@10 -- # set +x 00:16:36.514 00:59:10 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:36.514 00:59:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:36.514 00:59:10 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:36.774 [2024-11-18 00:59:10.932417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:36.774 [2024-11-18 00:59:10.932806] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.774 [2024-11-18 00:59:10.932887] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:36.774 [2024-11-18 00:59:10.933009] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.774 [2024-11-18 00:59:10.933560] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.774 [2024-11-18 00:59:10.933717] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:36.774 [2024-11-18 00:59:10.933920] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:36.774 [2024-11-18 00:59:10.934022] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:36.774 pt2 00:16:36.774 00:59:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:36.774 00:59:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:36.774 00:59:10 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:36.774 [2024-11-18 00:59:11.120503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:36.774 [2024-11-18 00:59:11.120871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.774 [2024-11-18 00:59:11.120944] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:36.774 [2024-11-18 00:59:11.121052] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.774 [2024-11-18 00:59:11.121567] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.774 [2024-11-18 00:59:11.121711] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:36.774 [2024-11-18 00:59:11.121899] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:36.774 [2024-11-18 00:59:11.122040] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:36.774 [2024-11-18 00:59:11.122227] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:36.774 [2024-11-18 00:59:11.122409] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:36.774 [2024-11-18 00:59:11.122583] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000026d0 00:16:36.774 [2024-11-18 00:59:11.123101] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:36.774 [2024-11-18 00:59:11.123205] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:16:36.774 [2024-11-18 00:59:11.123401] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.774 pt3 00:16:36.774 00:59:11 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:36.774 00:59:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:36.774 00:59:11 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:36.774 00:59:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:36.774 00:59:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:36.774 00:59:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:36.774 00:59:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:36.774 00:59:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:36.774 00:59:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:36.774 00:59:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:36.774 00:59:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:36.774 00:59:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:36.774 00:59:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.774 00:59:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.034 00:59:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:37.034 "name": "raid_bdev1", 00:16:37.034 "uuid": "135960b3-f7a5-4fb4-91f7-a16516beb854", 00:16:37.034 "strip_size_kb": 64, 00:16:37.034 "state": "online", 00:16:37.034 "raid_level": "raid0", 00:16:37.034 "superblock": true, 00:16:37.034 "num_base_bdevs": 3, 00:16:37.034 "num_base_bdevs_discovered": 3, 00:16:37.034 "num_base_bdevs_operational": 3, 00:16:37.034 "base_bdevs_list": [ 00:16:37.034 { 00:16:37.034 "name": "pt1", 00:16:37.034 "uuid": "c718706f-fca1-5ed2-b716-5c9c39ee0224", 00:16:37.034 "is_configured": true, 00:16:37.034 "data_offset": 2048, 00:16:37.034 "data_size": 63488 00:16:37.034 }, 00:16:37.034 { 00:16:37.034 "name": "pt2", 00:16:37.034 "uuid": "7dc4aa78-32cd-539f-853f-f33f45b18727", 00:16:37.034 "is_configured": true, 00:16:37.034 "data_offset": 2048, 00:16:37.034 "data_size": 63488 00:16:37.034 }, 00:16:37.034 { 00:16:37.034 "name": "pt3", 00:16:37.034 "uuid": "d1081847-4e2e-581d-9b75-cc9112260feb", 00:16:37.034 "is_configured": true, 00:16:37.034 "data_offset": 2048, 00:16:37.034 "data_size": 63488 00:16:37.034 } 00:16:37.034 ] 00:16:37.034 }' 00:16:37.034 00:59:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:37.034 00:59:11 -- common/autotest_common.sh@10 -- # set +x 00:16:37.602 00:59:11 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:37.602 00:59:11 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:37.861 [2024-11-18 00:59:12.136909] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.861 00:59:12 -- bdev/bdev_raid.sh@430 -- # '[' 135960b3-f7a5-4fb4-91f7-a16516beb854 '!=' 135960b3-f7a5-4fb4-91f7-a16516beb854 ']' 00:16:37.861 00:59:12 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:16:37.861 00:59:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:37.861 
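The verification above boils down to one RPC, bdev_raid_get_bdevs, filtered with jq and compared field by field against the expected state. A minimal stand-alone sketch of that check, reusing the RPC socket and repo path from this run (the verify_raid_bdev_state helper in bdev_raid.sh is the authoritative version and does more bookkeeping):

# Sketch only: query the raid bdev over the dedicated RPC socket used by this
# run and compare its state to what the test expects at this point.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(echo "$info" | jq -r '.state')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')

# With pt1, pt2 and pt3 all claimed, the raid0 bdev is expected to be online
# and to have discovered all three base bdevs.
if [ "$state" != "online" ] || [ "$discovered" -ne 3 ]; then
    echo "unexpected raid_bdev1 state: $state ($discovered/3 base bdevs)"
fi
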
00:59:12 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:37.861 00:59:12 -- bdev/bdev_raid.sh@511 -- # killprocess 126504 00:16:37.861 00:59:12 -- common/autotest_common.sh@936 -- # '[' -z 126504 ']' 00:16:37.861 00:59:12 -- common/autotest_common.sh@940 -- # kill -0 126504 00:16:37.861 00:59:12 -- common/autotest_common.sh@941 -- # uname 00:16:37.861 00:59:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.861 00:59:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126504 00:16:37.861 00:59:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:37.861 00:59:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:37.861 00:59:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126504' 00:16:37.861 killing process with pid 126504 00:16:37.861 00:59:12 -- common/autotest_common.sh@955 -- # kill 126504 00:16:37.861 00:59:12 -- common/autotest_common.sh@960 -- # wait 126504 00:16:37.861 [2024-11-18 00:59:12.193964] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.861 [2024-11-18 00:59:12.194061] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.861 [2024-11-18 00:59:12.194134] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.861 [2024-11-18 00:59:12.194144] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:16:37.861 [2024-11-18 00:59:12.257217] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:38.429 ************************************ 00:16:38.429 END TEST raid_superblock_test 00:16:38.429 ************************************ 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:38.429 00:16:38.429 real 0m9.792s 00:16:38.429 user 0m16.883s 00:16:38.429 sys 0m1.887s 00:16:38.429 00:59:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:38.429 00:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:16:38.429 00:59:12 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:38.429 00:59:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.429 00:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:38.429 ************************************ 00:16:38.429 START TEST raid_state_function_test 00:16:38.429 ************************************ 00:16:38.429 00:59:12 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 false 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@226 -- # raid_pid=126803 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126803' 00:16:38.429 Process raid pid: 126803 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126803 /var/tmp/spdk-raid.sock 00:16:38.429 00:59:12 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:38.429 00:59:12 -- common/autotest_common.sh@829 -- # '[' -z 126803 ']' 00:16:38.429 00:59:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:38.429 00:59:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:38.429 00:59:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:38.429 00:59:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.429 00:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:38.429 [2024-11-18 00:59:12.822148] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
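Before any of the bdev RPCs above can run, the test stands up the standalone bdev_svc app on a dedicated RPC socket and waits for it to answer. A simplified sketch of that startup, assuming rpc_get_methods as a cheap liveness probe (the real waitforlisten helper in autotest_common.sh adds retries, timeouts and pid bookkeeping):

# Sketch of the app startup traced above; paths and socket name as in this run.
spdk_dir=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-raid.sock

"$spdk_dir/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 -L bdev_raid &
raid_pid=$!

# Block until the RPC server answers on the socket before issuing bdev RPCs.
until "$spdk_dir/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
echo "bdev_svc is listening on $sock (pid $raid_pid)"
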
00:16:38.429 [2024-11-18 00:59:12.822733] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.688 [2024-11-18 00:59:12.975808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.688 [2024-11-18 00:59:13.062772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.947 [2024-11-18 00:59:13.143337] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.514 00:59:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.514 00:59:13 -- common/autotest_common.sh@862 -- # return 0 00:16:39.514 00:59:13 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:39.773 [2024-11-18 00:59:13.937770] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:39.773 [2024-11-18 00:59:13.938144] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:39.773 [2024-11-18 00:59:13.938239] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.773 [2024-11-18 00:59:13.938294] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.773 [2024-11-18 00:59:13.938321] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:39.773 [2024-11-18 00:59:13.938395] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:39.773 00:59:13 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:39.773 00:59:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:39.773 00:59:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:39.773 00:59:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:39.773 00:59:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:39.773 00:59:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:39.773 00:59:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:39.773 00:59:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:39.773 00:59:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:39.773 00:59:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:39.773 00:59:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.773 00:59:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.032 00:59:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.032 "name": "Existed_Raid", 00:16:40.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.032 "strip_size_kb": 64, 00:16:40.032 "state": "configuring", 00:16:40.032 "raid_level": "concat", 00:16:40.032 "superblock": false, 00:16:40.032 "num_base_bdevs": 3, 00:16:40.032 "num_base_bdevs_discovered": 0, 00:16:40.032 "num_base_bdevs_operational": 3, 00:16:40.032 "base_bdevs_list": [ 00:16:40.032 { 00:16:40.032 "name": "BaseBdev1", 00:16:40.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.032 "is_configured": false, 00:16:40.032 "data_offset": 0, 00:16:40.032 "data_size": 0 00:16:40.032 }, 00:16:40.032 { 00:16:40.032 "name": "BaseBdev2", 00:16:40.032 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:40.032 "is_configured": false, 00:16:40.032 "data_offset": 0, 00:16:40.032 "data_size": 0 00:16:40.032 }, 00:16:40.032 { 00:16:40.032 "name": "BaseBdev3", 00:16:40.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.032 "is_configured": false, 00:16:40.032 "data_offset": 0, 00:16:40.032 "data_size": 0 00:16:40.032 } 00:16:40.032 ] 00:16:40.032 }' 00:16:40.032 00:59:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.032 00:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:40.599 00:59:14 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:40.858 [2024-11-18 00:59:15.065835] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:40.858 [2024-11-18 00:59:15.066165] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:40.858 00:59:15 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:40.858 [2024-11-18 00:59:15.249907] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:40.858 [2024-11-18 00:59:15.250276] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:40.858 [2024-11-18 00:59:15.250367] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:40.858 [2024-11-18 00:59:15.250428] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:40.858 [2024-11-18 00:59:15.250454] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:40.858 [2024-11-18 00:59:15.250513] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:41.117 00:59:15 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:41.376 [2024-11-18 00:59:15.538422] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.376 BaseBdev1 00:16:41.376 00:59:15 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:41.376 00:59:15 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:41.376 00:59:15 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:41.376 00:59:15 -- common/autotest_common.sh@899 -- # local i 00:16:41.376 00:59:15 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:41.376 00:59:15 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:41.376 00:59:15 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.634 00:59:15 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:41.634 [ 00:16:41.634 { 00:16:41.634 "name": "BaseBdev1", 00:16:41.634 "aliases": [ 00:16:41.634 "e3c9406a-da4e-4970-a31e-3f3035c3c2db" 00:16:41.634 ], 00:16:41.634 "product_name": "Malloc disk", 00:16:41.634 "block_size": 512, 00:16:41.634 "num_blocks": 65536, 00:16:41.634 "uuid": "e3c9406a-da4e-4970-a31e-3f3035c3c2db", 00:16:41.634 "assigned_rate_limits": { 00:16:41.634 "rw_ios_per_sec": 0, 00:16:41.634 "rw_mbytes_per_sec": 0, 00:16:41.634 "r_mbytes_per_sec": 0, 00:16:41.634 "w_mbytes_per_sec": 
0 00:16:41.634 }, 00:16:41.634 "claimed": true, 00:16:41.634 "claim_type": "exclusive_write", 00:16:41.634 "zoned": false, 00:16:41.634 "supported_io_types": { 00:16:41.634 "read": true, 00:16:41.634 "write": true, 00:16:41.634 "unmap": true, 00:16:41.634 "write_zeroes": true, 00:16:41.634 "flush": true, 00:16:41.634 "reset": true, 00:16:41.634 "compare": false, 00:16:41.634 "compare_and_write": false, 00:16:41.634 "abort": true, 00:16:41.634 "nvme_admin": false, 00:16:41.634 "nvme_io": false 00:16:41.634 }, 00:16:41.634 "memory_domains": [ 00:16:41.634 { 00:16:41.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.635 "dma_device_type": 2 00:16:41.635 } 00:16:41.635 ], 00:16:41.635 "driver_specific": {} 00:16:41.635 } 00:16:41.635 ] 00:16:41.635 00:59:16 -- common/autotest_common.sh@905 -- # return 0 00:16:41.635 00:59:16 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:41.635 00:59:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:41.635 00:59:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:41.635 00:59:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:41.635 00:59:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:41.635 00:59:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:41.635 00:59:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.635 00:59:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.635 00:59:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.635 00:59:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.635 00:59:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.635 00:59:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.893 00:59:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.893 "name": "Existed_Raid", 00:16:41.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.893 "strip_size_kb": 64, 00:16:41.893 "state": "configuring", 00:16:41.893 "raid_level": "concat", 00:16:41.893 "superblock": false, 00:16:41.893 "num_base_bdevs": 3, 00:16:41.893 "num_base_bdevs_discovered": 1, 00:16:41.893 "num_base_bdevs_operational": 3, 00:16:41.893 "base_bdevs_list": [ 00:16:41.893 { 00:16:41.893 "name": "BaseBdev1", 00:16:41.893 "uuid": "e3c9406a-da4e-4970-a31e-3f3035c3c2db", 00:16:41.893 "is_configured": true, 00:16:41.893 "data_offset": 0, 00:16:41.893 "data_size": 65536 00:16:41.893 }, 00:16:41.893 { 00:16:41.893 "name": "BaseBdev2", 00:16:41.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.893 "is_configured": false, 00:16:41.893 "data_offset": 0, 00:16:41.893 "data_size": 0 00:16:41.893 }, 00:16:41.893 { 00:16:41.893 "name": "BaseBdev3", 00:16:41.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.893 "is_configured": false, 00:16:41.893 "data_offset": 0, 00:16:41.893 "data_size": 0 00:16:41.893 } 00:16:41.893 ] 00:16:41.893 }' 00:16:41.893 00:59:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.893 00:59:16 -- common/autotest_common.sh@10 -- # set +x 00:16:42.460 00:59:16 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:42.719 [2024-11-18 00:59:17.011088] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.719 [2024-11-18 00:59:17.011447] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:16:42.719 00:59:17 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:42.719 00:59:17 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:42.978 [2024-11-18 00:59:17.195240] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.978 [2024-11-18 00:59:17.198150] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.978 [2024-11-18 00:59:17.198397] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.978 [2024-11-18 00:59:17.198496] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:42.978 [2024-11-18 00:59:17.198587] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.978 00:59:17 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:42.978 00:59:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:42.978 00:59:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:42.978 00:59:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:42.978 00:59:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:42.978 00:59:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:42.978 00:59:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:42.978 00:59:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:42.978 00:59:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:42.978 00:59:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:42.978 00:59:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:42.978 00:59:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:42.978 00:59:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.978 00:59:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.237 00:59:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:43.237 "name": "Existed_Raid", 00:16:43.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.237 "strip_size_kb": 64, 00:16:43.237 "state": "configuring", 00:16:43.237 "raid_level": "concat", 00:16:43.237 "superblock": false, 00:16:43.237 "num_base_bdevs": 3, 00:16:43.237 "num_base_bdevs_discovered": 1, 00:16:43.237 "num_base_bdevs_operational": 3, 00:16:43.237 "base_bdevs_list": [ 00:16:43.237 { 00:16:43.237 "name": "BaseBdev1", 00:16:43.237 "uuid": "e3c9406a-da4e-4970-a31e-3f3035c3c2db", 00:16:43.237 "is_configured": true, 00:16:43.237 "data_offset": 0, 00:16:43.237 "data_size": 65536 00:16:43.237 }, 00:16:43.237 { 00:16:43.237 "name": "BaseBdev2", 00:16:43.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.237 "is_configured": false, 00:16:43.237 "data_offset": 0, 00:16:43.237 "data_size": 0 00:16:43.237 }, 00:16:43.237 { 00:16:43.237 "name": "BaseBdev3", 00:16:43.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.237 "is_configured": false, 00:16:43.237 "data_offset": 0, 00:16:43.237 "data_size": 0 00:16:43.237 } 00:16:43.237 ] 00:16:43.237 }' 00:16:43.237 00:59:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:43.237 00:59:17 -- common/autotest_common.sh@10 -- # set +x 00:16:43.805 00:59:18 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:44.064 [2024-11-18 00:59:18.376014] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.064 BaseBdev2 00:16:44.064 00:59:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:44.064 00:59:18 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:44.064 00:59:18 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:44.064 00:59:18 -- common/autotest_common.sh@899 -- # local i 00:16:44.064 00:59:18 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:44.064 00:59:18 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:44.064 00:59:18 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:44.322 00:59:18 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:44.581 [ 00:16:44.581 { 00:16:44.581 "name": "BaseBdev2", 00:16:44.581 "aliases": [ 00:16:44.581 "6a0d5365-9d52-4b78-98ac-917c10855879" 00:16:44.581 ], 00:16:44.581 "product_name": "Malloc disk", 00:16:44.581 "block_size": 512, 00:16:44.581 "num_blocks": 65536, 00:16:44.581 "uuid": "6a0d5365-9d52-4b78-98ac-917c10855879", 00:16:44.581 "assigned_rate_limits": { 00:16:44.581 "rw_ios_per_sec": 0, 00:16:44.581 "rw_mbytes_per_sec": 0, 00:16:44.581 "r_mbytes_per_sec": 0, 00:16:44.581 "w_mbytes_per_sec": 0 00:16:44.581 }, 00:16:44.581 "claimed": true, 00:16:44.581 "claim_type": "exclusive_write", 00:16:44.581 "zoned": false, 00:16:44.581 "supported_io_types": { 00:16:44.581 "read": true, 00:16:44.581 "write": true, 00:16:44.581 "unmap": true, 00:16:44.581 "write_zeroes": true, 00:16:44.581 "flush": true, 00:16:44.581 "reset": true, 00:16:44.581 "compare": false, 00:16:44.581 "compare_and_write": false, 00:16:44.581 "abort": true, 00:16:44.581 "nvme_admin": false, 00:16:44.581 "nvme_io": false 00:16:44.581 }, 00:16:44.581 "memory_domains": [ 00:16:44.581 { 00:16:44.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.581 "dma_device_type": 2 00:16:44.581 } 00:16:44.581 ], 00:16:44.581 "driver_specific": {} 00:16:44.581 } 00:16:44.581 ] 00:16:44.581 00:59:18 -- common/autotest_common.sh@905 -- # return 0 00:16:44.581 00:59:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:44.581 00:59:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:44.581 00:59:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:44.581 00:59:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:44.581 00:59:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:44.581 00:59:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:44.581 00:59:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:44.581 00:59:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:44.581 00:59:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:44.581 00:59:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:44.581 00:59:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:44.581 00:59:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:44.581 00:59:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.581 00:59:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
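Each BaseBdevN used by this test is a 32 MiB malloc bdev with 512-byte blocks, created and then waited for with the same RPC sequence every time. A condensed sketch of that create-and-wait step (the waitforbdev helper in autotest_common.sh is the canonical version):

# Sketch of the step just traced for BaseBdev2.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# 32 MiB malloc bdev with 512-byte blocks -> the 65536 blocks reported above.
$rpc bdev_malloc_create 32 512 -b BaseBdev2

# Let examine callbacks settle, then confirm the bdev is visible
# (-t 2000 makes bdev_get_bdevs wait up to 2000 ms for it to appear).
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b BaseBdev2 -t 2000 >/dev/null
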
00:16:44.840 00:59:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:44.840 "name": "Existed_Raid", 00:16:44.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.840 "strip_size_kb": 64, 00:16:44.840 "state": "configuring", 00:16:44.840 "raid_level": "concat", 00:16:44.840 "superblock": false, 00:16:44.840 "num_base_bdevs": 3, 00:16:44.840 "num_base_bdevs_discovered": 2, 00:16:44.840 "num_base_bdevs_operational": 3, 00:16:44.840 "base_bdevs_list": [ 00:16:44.840 { 00:16:44.840 "name": "BaseBdev1", 00:16:44.840 "uuid": "e3c9406a-da4e-4970-a31e-3f3035c3c2db", 00:16:44.840 "is_configured": true, 00:16:44.840 "data_offset": 0, 00:16:44.840 "data_size": 65536 00:16:44.840 }, 00:16:44.840 { 00:16:44.840 "name": "BaseBdev2", 00:16:44.840 "uuid": "6a0d5365-9d52-4b78-98ac-917c10855879", 00:16:44.840 "is_configured": true, 00:16:44.840 "data_offset": 0, 00:16:44.840 "data_size": 65536 00:16:44.840 }, 00:16:44.840 { 00:16:44.840 "name": "BaseBdev3", 00:16:44.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.840 "is_configured": false, 00:16:44.840 "data_offset": 0, 00:16:44.840 "data_size": 0 00:16:44.840 } 00:16:44.840 ] 00:16:44.840 }' 00:16:44.840 00:59:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:44.840 00:59:19 -- common/autotest_common.sh@10 -- # set +x 00:16:45.406 00:59:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:45.665 [2024-11-18 00:59:19.861911] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.665 [2024-11-18 00:59:19.862310] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:45.665 [2024-11-18 00:59:19.862354] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:45.665 [2024-11-18 00:59:19.862613] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:16:45.665 [2024-11-18 00:59:19.863146] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:45.665 [2024-11-18 00:59:19.863262] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:16:45.665 [2024-11-18 00:59:19.863622] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.665 BaseBdev3 00:16:45.665 00:59:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:45.665 00:59:19 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:45.665 00:59:19 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:45.665 00:59:19 -- common/autotest_common.sh@899 -- # local i 00:16:45.665 00:59:19 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:45.665 00:59:19 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:45.665 00:59:19 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:45.923 00:59:20 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:46.182 [ 00:16:46.182 { 00:16:46.182 "name": "BaseBdev3", 00:16:46.182 "aliases": [ 00:16:46.182 "7fc9bde0-b064-4ddd-949f-098a104fc238" 00:16:46.182 ], 00:16:46.182 "product_name": "Malloc disk", 00:16:46.182 "block_size": 512, 00:16:46.182 "num_blocks": 65536, 00:16:46.182 "uuid": "7fc9bde0-b064-4ddd-949f-098a104fc238", 00:16:46.182 "assigned_rate_limits": { 00:16:46.182 
"rw_ios_per_sec": 0, 00:16:46.182 "rw_mbytes_per_sec": 0, 00:16:46.182 "r_mbytes_per_sec": 0, 00:16:46.182 "w_mbytes_per_sec": 0 00:16:46.182 }, 00:16:46.182 "claimed": true, 00:16:46.182 "claim_type": "exclusive_write", 00:16:46.182 "zoned": false, 00:16:46.182 "supported_io_types": { 00:16:46.182 "read": true, 00:16:46.182 "write": true, 00:16:46.182 "unmap": true, 00:16:46.182 "write_zeroes": true, 00:16:46.182 "flush": true, 00:16:46.182 "reset": true, 00:16:46.182 "compare": false, 00:16:46.182 "compare_and_write": false, 00:16:46.182 "abort": true, 00:16:46.182 "nvme_admin": false, 00:16:46.182 "nvme_io": false 00:16:46.182 }, 00:16:46.182 "memory_domains": [ 00:16:46.182 { 00:16:46.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.182 "dma_device_type": 2 00:16:46.182 } 00:16:46.182 ], 00:16:46.182 "driver_specific": {} 00:16:46.182 } 00:16:46.182 ] 00:16:46.182 00:59:20 -- common/autotest_common.sh@905 -- # return 0 00:16:46.182 00:59:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:46.182 00:59:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:46.182 00:59:20 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:46.182 00:59:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:46.182 00:59:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:46.182 00:59:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:46.182 00:59:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:46.182 00:59:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:46.182 00:59:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:46.182 00:59:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:46.182 00:59:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:46.182 00:59:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:46.182 00:59:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.182 00:59:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.440 00:59:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:46.440 "name": "Existed_Raid", 00:16:46.440 "uuid": "186a85f4-7673-43f0-b3a8-ff0dca941870", 00:16:46.440 "strip_size_kb": 64, 00:16:46.440 "state": "online", 00:16:46.440 "raid_level": "concat", 00:16:46.440 "superblock": false, 00:16:46.440 "num_base_bdevs": 3, 00:16:46.440 "num_base_bdevs_discovered": 3, 00:16:46.440 "num_base_bdevs_operational": 3, 00:16:46.440 "base_bdevs_list": [ 00:16:46.440 { 00:16:46.440 "name": "BaseBdev1", 00:16:46.440 "uuid": "e3c9406a-da4e-4970-a31e-3f3035c3c2db", 00:16:46.440 "is_configured": true, 00:16:46.440 "data_offset": 0, 00:16:46.440 "data_size": 65536 00:16:46.440 }, 00:16:46.440 { 00:16:46.440 "name": "BaseBdev2", 00:16:46.440 "uuid": "6a0d5365-9d52-4b78-98ac-917c10855879", 00:16:46.440 "is_configured": true, 00:16:46.440 "data_offset": 0, 00:16:46.440 "data_size": 65536 00:16:46.440 }, 00:16:46.440 { 00:16:46.440 "name": "BaseBdev3", 00:16:46.440 "uuid": "7fc9bde0-b064-4ddd-949f-098a104fc238", 00:16:46.440 "is_configured": true, 00:16:46.440 "data_offset": 0, 00:16:46.440 "data_size": 65536 00:16:46.440 } 00:16:46.440 ] 00:16:46.440 }' 00:16:46.440 00:59:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:46.440 00:59:20 -- common/autotest_common.sh@10 -- # set +x 00:16:47.007 00:59:21 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:47.268 [2024-11-18 00:59:21.442540] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:47.268 [2024-11-18 00:59:21.442826] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.268 [2024-11-18 00:59:21.443042] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.268 00:59:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.555 00:59:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.555 "name": "Existed_Raid", 00:16:47.555 "uuid": "186a85f4-7673-43f0-b3a8-ff0dca941870", 00:16:47.555 "strip_size_kb": 64, 00:16:47.555 "state": "offline", 00:16:47.555 "raid_level": "concat", 00:16:47.555 "superblock": false, 00:16:47.555 "num_base_bdevs": 3, 00:16:47.555 "num_base_bdevs_discovered": 2, 00:16:47.555 "num_base_bdevs_operational": 2, 00:16:47.555 "base_bdevs_list": [ 00:16:47.555 { 00:16:47.555 "name": null, 00:16:47.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.555 "is_configured": false, 00:16:47.555 "data_offset": 0, 00:16:47.555 "data_size": 65536 00:16:47.555 }, 00:16:47.555 { 00:16:47.555 "name": "BaseBdev2", 00:16:47.555 "uuid": "6a0d5365-9d52-4b78-98ac-917c10855879", 00:16:47.555 "is_configured": true, 00:16:47.555 "data_offset": 0, 00:16:47.555 "data_size": 65536 00:16:47.555 }, 00:16:47.555 { 00:16:47.555 "name": "BaseBdev3", 00:16:47.555 "uuid": "7fc9bde0-b064-4ddd-949f-098a104fc238", 00:16:47.555 "is_configured": true, 00:16:47.555 "data_offset": 0, 00:16:47.555 "data_size": 65536 00:16:47.555 } 00:16:47.555 ] 00:16:47.555 }' 00:16:47.555 00:59:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.555 00:59:21 -- common/autotest_common.sh@10 -- # set +x 00:16:48.144 00:59:22 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:48.144 00:59:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:48.144 00:59:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.144 00:59:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:48.402 00:59:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:48.402 00:59:22 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:48.402 00:59:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:48.661 [2024-11-18 00:59:22.808577] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:48.661 00:59:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:48.661 00:59:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:48.661 00:59:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.661 00:59:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:48.661 00:59:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:48.661 00:59:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:48.661 00:59:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:48.919 [2024-11-18 00:59:23.269958] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:48.919 [2024-11-18 00:59:23.270326] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:16:48.919 00:59:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:48.919 00:59:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:48.919 00:59:23 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:48.919 00:59:23 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.178 00:59:23 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:49.178 00:59:23 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:49.178 00:59:23 -- bdev/bdev_raid.sh@287 -- # killprocess 126803 00:16:49.178 00:59:23 -- common/autotest_common.sh@936 -- # '[' -z 126803 ']' 00:16:49.178 00:59:23 -- common/autotest_common.sh@940 -- # kill -0 126803 00:16:49.178 00:59:23 -- common/autotest_common.sh@941 -- # uname 00:16:49.178 00:59:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:49.178 00:59:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126803 00:16:49.436 killing process with pid 126803 00:16:49.436 00:59:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:49.436 00:59:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:49.436 00:59:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126803' 00:16:49.436 00:59:23 -- common/autotest_common.sh@955 -- # kill 126803 00:16:49.436 00:59:23 -- common/autotest_common.sh@960 -- # wait 126803 00:16:49.436 [2024-11-18 00:59:23.593461] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.436 [2024-11-18 00:59:23.593576] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:49.695 00:59:23 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:49.695 00:16:49.695 real 0m11.245s 00:16:49.695 user 0m19.590s 00:16:49.695 sys 0m2.231s 00:16:49.695 00:59:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:49.695 00:59:23 -- common/autotest_common.sh@10 -- # set +x 00:16:49.695 ************************************ 00:16:49.695 END TEST raid_state_function_test 00:16:49.695 ************************************ 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:16:49.695 00:59:24 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 
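The teardown a few entries back (killprocess 126803) follows the usual autotest pattern: confirm the pid still belongs to an SPDK reactor, kill it, then wait for it so the exit status is reaped. Roughly, as a sketch (the helper in autotest_common.sh is the real implementation):

pid=126803
if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
    # Only proceed if the pid is still an SPDK reactor thread, as checked above.
    if [ "$(ps --no-headers -o comm= "$pid")" = "reactor_0" ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    fi
fi
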
00:16:49.695 00:59:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.695 00:59:24 -- common/autotest_common.sh@10 -- # set +x 00:16:49.695 ************************************ 00:16:49.695 START TEST raid_state_function_test_sb 00:16:49.695 ************************************ 00:16:49.695 00:59:24 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 true 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@226 -- # raid_pid=127174 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127174' 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:49.695 Process raid pid: 127174 00:16:49.695 00:59:24 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127174 /var/tmp/spdk-raid.sock 00:16:49.695 00:59:24 -- common/autotest_common.sh@829 -- # '[' -z 127174 ']' 00:16:49.695 00:59:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:49.695 00:59:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:49.695 00:59:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:49.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:49.695 00:59:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:49.695 00:59:24 -- common/autotest_common.sh@10 -- # set +x 00:16:49.954 [2024-11-18 00:59:24.132219] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
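raid_state_function_test_sb repeats the same state-machine checks with superblock=true, which only changes the bdev_raid_create invocation: the -s flag makes the raid bdev reserve space for an on-disk superblock, which is why data_offset/data_size flip from 0/65536 to 2048/63488 blocks in the dumps that follow. Side by side (both invocations as they appear in this log):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# raid_state_function_test     (superblock=false, no extra flag)
$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# raid_state_function_test_sb  (superblock=true, adds -s)
$rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
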
00:16:49.954 [2024-11-18 00:59:24.132682] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.954 [2024-11-18 00:59:24.277575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.212 [2024-11-18 00:59:24.364168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.212 [2024-11-18 00:59:24.443234] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.779 00:59:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:50.779 00:59:25 -- common/autotest_common.sh@862 -- # return 0 00:16:50.779 00:59:25 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:51.038 [2024-11-18 00:59:25.280964] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.038 [2024-11-18 00:59:25.281351] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.038 [2024-11-18 00:59:25.281452] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.038 [2024-11-18 00:59:25.281507] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.038 [2024-11-18 00:59:25.281533] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:51.038 [2024-11-18 00:59:25.281647] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.038 00:59:25 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:51.038 00:59:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:51.038 00:59:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:51.038 00:59:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:51.038 00:59:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:51.038 00:59:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:51.038 00:59:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.038 00:59:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.038 00:59:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.038 00:59:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.038 00:59:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.038 00:59:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.296 00:59:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:51.296 "name": "Existed_Raid", 00:16:51.296 "uuid": "2535f60c-2b5c-4b27-ba3d-89ed78079f2c", 00:16:51.296 "strip_size_kb": 64, 00:16:51.296 "state": "configuring", 00:16:51.296 "raid_level": "concat", 00:16:51.296 "superblock": true, 00:16:51.296 "num_base_bdevs": 3, 00:16:51.296 "num_base_bdevs_discovered": 0, 00:16:51.296 "num_base_bdevs_operational": 3, 00:16:51.296 "base_bdevs_list": [ 00:16:51.296 { 00:16:51.296 "name": "BaseBdev1", 00:16:51.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.296 "is_configured": false, 00:16:51.296 "data_offset": 0, 00:16:51.296 "data_size": 0 00:16:51.296 }, 00:16:51.296 { 00:16:51.296 "name": "BaseBdev2", 00:16:51.296 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:51.296 "is_configured": false, 00:16:51.296 "data_offset": 0, 00:16:51.296 "data_size": 0 00:16:51.296 }, 00:16:51.296 { 00:16:51.296 "name": "BaseBdev3", 00:16:51.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.296 "is_configured": false, 00:16:51.296 "data_offset": 0, 00:16:51.296 "data_size": 0 00:16:51.296 } 00:16:51.296 ] 00:16:51.296 }' 00:16:51.296 00:59:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:51.296 00:59:25 -- common/autotest_common.sh@10 -- # set +x 00:16:51.863 00:59:26 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:52.121 [2024-11-18 00:59:26.289015] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.121 [2024-11-18 00:59:26.289344] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:52.121 00:59:26 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:52.121 [2024-11-18 00:59:26.485113] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.121 [2024-11-18 00:59:26.485472] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.121 [2024-11-18 00:59:26.485557] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.121 [2024-11-18 00:59:26.485617] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.121 [2024-11-18 00:59:26.485643] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:52.121 [2024-11-18 00:59:26.485689] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:52.121 00:59:26 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:52.381 [2024-11-18 00:59:26.693210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.381 BaseBdev1 00:16:52.381 00:59:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:52.381 00:59:26 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:52.381 00:59:26 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:52.381 00:59:26 -- common/autotest_common.sh@899 -- # local i 00:16:52.381 00:59:26 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:52.381 00:59:26 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:52.381 00:59:26 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:52.639 00:59:26 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:52.898 [ 00:16:52.898 { 00:16:52.898 "name": "BaseBdev1", 00:16:52.898 "aliases": [ 00:16:52.898 "40420626-42d4-4e48-ba75-78159fc21530" 00:16:52.898 ], 00:16:52.898 "product_name": "Malloc disk", 00:16:52.898 "block_size": 512, 00:16:52.898 "num_blocks": 65536, 00:16:52.898 "uuid": "40420626-42d4-4e48-ba75-78159fc21530", 00:16:52.898 "assigned_rate_limits": { 00:16:52.898 "rw_ios_per_sec": 0, 00:16:52.898 "rw_mbytes_per_sec": 0, 00:16:52.898 "r_mbytes_per_sec": 0, 00:16:52.898 
"w_mbytes_per_sec": 0 00:16:52.898 }, 00:16:52.898 "claimed": true, 00:16:52.898 "claim_type": "exclusive_write", 00:16:52.898 "zoned": false, 00:16:52.898 "supported_io_types": { 00:16:52.898 "read": true, 00:16:52.898 "write": true, 00:16:52.898 "unmap": true, 00:16:52.898 "write_zeroes": true, 00:16:52.898 "flush": true, 00:16:52.898 "reset": true, 00:16:52.898 "compare": false, 00:16:52.898 "compare_and_write": false, 00:16:52.898 "abort": true, 00:16:52.898 "nvme_admin": false, 00:16:52.898 "nvme_io": false 00:16:52.898 }, 00:16:52.898 "memory_domains": [ 00:16:52.898 { 00:16:52.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.898 "dma_device_type": 2 00:16:52.898 } 00:16:52.898 ], 00:16:52.898 "driver_specific": {} 00:16:52.898 } 00:16:52.898 ] 00:16:52.898 00:59:27 -- common/autotest_common.sh@905 -- # return 0 00:16:52.898 00:59:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:52.898 00:59:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:52.898 00:59:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:52.898 00:59:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:52.898 00:59:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:52.898 00:59:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:52.898 00:59:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:52.898 00:59:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:52.898 00:59:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:52.898 00:59:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:52.898 00:59:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.898 00:59:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.157 00:59:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:53.157 "name": "Existed_Raid", 00:16:53.157 "uuid": "e75bdc05-999a-42d2-b5de-7043d03a8ba8", 00:16:53.157 "strip_size_kb": 64, 00:16:53.157 "state": "configuring", 00:16:53.157 "raid_level": "concat", 00:16:53.157 "superblock": true, 00:16:53.157 "num_base_bdevs": 3, 00:16:53.157 "num_base_bdevs_discovered": 1, 00:16:53.157 "num_base_bdevs_operational": 3, 00:16:53.157 "base_bdevs_list": [ 00:16:53.157 { 00:16:53.157 "name": "BaseBdev1", 00:16:53.157 "uuid": "40420626-42d4-4e48-ba75-78159fc21530", 00:16:53.157 "is_configured": true, 00:16:53.157 "data_offset": 2048, 00:16:53.157 "data_size": 63488 00:16:53.157 }, 00:16:53.157 { 00:16:53.157 "name": "BaseBdev2", 00:16:53.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.157 "is_configured": false, 00:16:53.157 "data_offset": 0, 00:16:53.157 "data_size": 0 00:16:53.157 }, 00:16:53.157 { 00:16:53.157 "name": "BaseBdev3", 00:16:53.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.157 "is_configured": false, 00:16:53.157 "data_offset": 0, 00:16:53.157 "data_size": 0 00:16:53.157 } 00:16:53.157 ] 00:16:53.157 }' 00:16:53.157 00:59:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:53.157 00:59:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.723 00:59:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:53.723 [2024-11-18 00:59:28.109913] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.723 [2024-11-18 00:59:28.110001] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:16:53.981 00:59:28 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:53.982 00:59:28 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:54.240 00:59:28 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:54.498 BaseBdev1 00:16:54.498 00:59:28 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:54.498 00:59:28 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:54.498 00:59:28 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:54.498 00:59:28 -- common/autotest_common.sh@899 -- # local i 00:16:54.498 00:59:28 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:54.498 00:59:28 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:54.498 00:59:28 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:54.498 00:59:28 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:54.757 [ 00:16:54.757 { 00:16:54.757 "name": "BaseBdev1", 00:16:54.757 "aliases": [ 00:16:54.757 "9e9e4927-58b9-49b0-903a-a080a7526617" 00:16:54.757 ], 00:16:54.757 "product_name": "Malloc disk", 00:16:54.757 "block_size": 512, 00:16:54.757 "num_blocks": 65536, 00:16:54.757 "uuid": "9e9e4927-58b9-49b0-903a-a080a7526617", 00:16:54.757 "assigned_rate_limits": { 00:16:54.757 "rw_ios_per_sec": 0, 00:16:54.757 "rw_mbytes_per_sec": 0, 00:16:54.757 "r_mbytes_per_sec": 0, 00:16:54.757 "w_mbytes_per_sec": 0 00:16:54.757 }, 00:16:54.757 "claimed": false, 00:16:54.757 "zoned": false, 00:16:54.757 "supported_io_types": { 00:16:54.757 "read": true, 00:16:54.757 "write": true, 00:16:54.757 "unmap": true, 00:16:54.757 "write_zeroes": true, 00:16:54.757 "flush": true, 00:16:54.757 "reset": true, 00:16:54.757 "compare": false, 00:16:54.757 "compare_and_write": false, 00:16:54.757 "abort": true, 00:16:54.757 "nvme_admin": false, 00:16:54.757 "nvme_io": false 00:16:54.757 }, 00:16:54.757 "memory_domains": [ 00:16:54.757 { 00:16:54.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.757 "dma_device_type": 2 00:16:54.757 } 00:16:54.757 ], 00:16:54.757 "driver_specific": {} 00:16:54.757 } 00:16:54.757 ] 00:16:54.757 00:59:29 -- common/autotest_common.sh@905 -- # return 0 00:16:54.757 00:59:29 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:55.015 [2024-11-18 00:59:29.255155] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.015 [2024-11-18 00:59:29.257662] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.015 [2024-11-18 00:59:29.257742] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.015 [2024-11-18 00:59:29.257752] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:55.015 [2024-11-18 00:59:29.257778] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:55.015 00:59:29 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:55.015 00:59:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:55.015 
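While the raid bdev is still configuring, the bdev_raid_get_bdevs dump shows exactly which base bdev slots are populated via the per-entry is_configured flags. A small jq sketch (illustrative, not part of the test) that lists the slots still missing:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_raid_get_bdevs all | jq -r '
    .[] | select(.name == "Existed_Raid")
        | .base_bdevs_list[]
        | select(.is_configured == false)
        | .name'
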
00:59:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:55.015 00:59:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:55.015 00:59:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:55.015 00:59:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:55.015 00:59:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:55.015 00:59:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:55.015 00:59:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:55.015 00:59:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:55.015 00:59:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:55.015 00:59:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:55.015 00:59:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.015 00:59:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.272 00:59:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.272 "name": "Existed_Raid", 00:16:55.272 "uuid": "d27ecc44-3e49-4d33-85c4-4a0a4623a309", 00:16:55.272 "strip_size_kb": 64, 00:16:55.272 "state": "configuring", 00:16:55.272 "raid_level": "concat", 00:16:55.272 "superblock": true, 00:16:55.272 "num_base_bdevs": 3, 00:16:55.272 "num_base_bdevs_discovered": 1, 00:16:55.272 "num_base_bdevs_operational": 3, 00:16:55.272 "base_bdevs_list": [ 00:16:55.272 { 00:16:55.272 "name": "BaseBdev1", 00:16:55.272 "uuid": "9e9e4927-58b9-49b0-903a-a080a7526617", 00:16:55.272 "is_configured": true, 00:16:55.272 "data_offset": 2048, 00:16:55.272 "data_size": 63488 00:16:55.272 }, 00:16:55.272 { 00:16:55.272 "name": "BaseBdev2", 00:16:55.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.272 "is_configured": false, 00:16:55.272 "data_offset": 0, 00:16:55.272 "data_size": 0 00:16:55.272 }, 00:16:55.272 { 00:16:55.272 "name": "BaseBdev3", 00:16:55.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.272 "is_configured": false, 00:16:55.273 "data_offset": 0, 00:16:55.273 "data_size": 0 00:16:55.273 } 00:16:55.273 ] 00:16:55.273 }' 00:16:55.273 00:59:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.273 00:59:29 -- common/autotest_common.sh@10 -- # set +x 00:16:55.839 00:59:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:56.098 [2024-11-18 00:59:30.384103] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.098 BaseBdev2 00:16:56.098 00:59:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:56.098 00:59:30 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:56.098 00:59:30 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:56.098 00:59:30 -- common/autotest_common.sh@899 -- # local i 00:16:56.098 00:59:30 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:56.098 00:59:30 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:56.098 00:59:30 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.356 00:59:30 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:56.614 [ 00:16:56.614 { 00:16:56.614 "name": "BaseBdev2", 00:16:56.614 "aliases": [ 00:16:56.614 
"40c43973-d86f-4ce0-8330-0c3946267e0d" 00:16:56.614 ], 00:16:56.614 "product_name": "Malloc disk", 00:16:56.614 "block_size": 512, 00:16:56.614 "num_blocks": 65536, 00:16:56.614 "uuid": "40c43973-d86f-4ce0-8330-0c3946267e0d", 00:16:56.614 "assigned_rate_limits": { 00:16:56.614 "rw_ios_per_sec": 0, 00:16:56.614 "rw_mbytes_per_sec": 0, 00:16:56.614 "r_mbytes_per_sec": 0, 00:16:56.614 "w_mbytes_per_sec": 0 00:16:56.614 }, 00:16:56.614 "claimed": true, 00:16:56.614 "claim_type": "exclusive_write", 00:16:56.614 "zoned": false, 00:16:56.614 "supported_io_types": { 00:16:56.614 "read": true, 00:16:56.614 "write": true, 00:16:56.614 "unmap": true, 00:16:56.614 "write_zeroes": true, 00:16:56.614 "flush": true, 00:16:56.614 "reset": true, 00:16:56.614 "compare": false, 00:16:56.614 "compare_and_write": false, 00:16:56.614 "abort": true, 00:16:56.614 "nvme_admin": false, 00:16:56.614 "nvme_io": false 00:16:56.614 }, 00:16:56.614 "memory_domains": [ 00:16:56.614 { 00:16:56.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.614 "dma_device_type": 2 00:16:56.614 } 00:16:56.614 ], 00:16:56.614 "driver_specific": {} 00:16:56.614 } 00:16:56.614 ] 00:16:56.615 00:59:30 -- common/autotest_common.sh@905 -- # return 0 00:16:56.615 00:59:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:56.615 00:59:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:56.615 00:59:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:56.615 00:59:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:56.615 00:59:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:56.615 00:59:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:56.615 00:59:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:56.615 00:59:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:56.615 00:59:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.615 00:59:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.615 00:59:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.615 00:59:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.615 00:59:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.615 00:59:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.873 00:59:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:56.873 "name": "Existed_Raid", 00:16:56.873 "uuid": "d27ecc44-3e49-4d33-85c4-4a0a4623a309", 00:16:56.873 "strip_size_kb": 64, 00:16:56.873 "state": "configuring", 00:16:56.873 "raid_level": "concat", 00:16:56.873 "superblock": true, 00:16:56.873 "num_base_bdevs": 3, 00:16:56.874 "num_base_bdevs_discovered": 2, 00:16:56.874 "num_base_bdevs_operational": 3, 00:16:56.874 "base_bdevs_list": [ 00:16:56.874 { 00:16:56.874 "name": "BaseBdev1", 00:16:56.874 "uuid": "9e9e4927-58b9-49b0-903a-a080a7526617", 00:16:56.874 "is_configured": true, 00:16:56.874 "data_offset": 2048, 00:16:56.874 "data_size": 63488 00:16:56.874 }, 00:16:56.874 { 00:16:56.874 "name": "BaseBdev2", 00:16:56.874 "uuid": "40c43973-d86f-4ce0-8330-0c3946267e0d", 00:16:56.874 "is_configured": true, 00:16:56.874 "data_offset": 2048, 00:16:56.874 "data_size": 63488 00:16:56.874 }, 00:16:56.874 { 00:16:56.874 "name": "BaseBdev3", 00:16:56.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.874 "is_configured": false, 00:16:56.874 "data_offset": 0, 00:16:56.874 "data_size": 0 
00:16:56.874 } 00:16:56.874 ] 00:16:56.874 }' 00:16:56.874 00:59:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:56.874 00:59:31 -- common/autotest_common.sh@10 -- # set +x 00:16:57.441 00:59:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:57.700 [2024-11-18 00:59:32.036205] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:57.700 [2024-11-18 00:59:32.036976] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:16:57.700 [2024-11-18 00:59:32.037190] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:57.700 [2024-11-18 00:59:32.037520] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:57.700 [2024-11-18 00:59:32.038110] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:16:57.700 [2024-11-18 00:59:32.038347] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:16:57.700 BaseBdev3 00:16:57.700 [2024-11-18 00:59:32.038889] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.700 00:59:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:57.700 00:59:32 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:57.700 00:59:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:57.700 00:59:32 -- common/autotest_common.sh@899 -- # local i 00:16:57.700 00:59:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:57.700 00:59:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:57.700 00:59:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:57.959 00:59:32 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:58.217 [ 00:16:58.217 { 00:16:58.217 "name": "BaseBdev3", 00:16:58.217 "aliases": [ 00:16:58.217 "598ea956-5de2-45ef-acac-7fd6b2f79bb3" 00:16:58.217 ], 00:16:58.217 "product_name": "Malloc disk", 00:16:58.217 "block_size": 512, 00:16:58.217 "num_blocks": 65536, 00:16:58.217 "uuid": "598ea956-5de2-45ef-acac-7fd6b2f79bb3", 00:16:58.217 "assigned_rate_limits": { 00:16:58.217 "rw_ios_per_sec": 0, 00:16:58.217 "rw_mbytes_per_sec": 0, 00:16:58.217 "r_mbytes_per_sec": 0, 00:16:58.217 "w_mbytes_per_sec": 0 00:16:58.217 }, 00:16:58.217 "claimed": true, 00:16:58.217 "claim_type": "exclusive_write", 00:16:58.217 "zoned": false, 00:16:58.217 "supported_io_types": { 00:16:58.217 "read": true, 00:16:58.217 "write": true, 00:16:58.217 "unmap": true, 00:16:58.217 "write_zeroes": true, 00:16:58.217 "flush": true, 00:16:58.217 "reset": true, 00:16:58.217 "compare": false, 00:16:58.217 "compare_and_write": false, 00:16:58.217 "abort": true, 00:16:58.217 "nvme_admin": false, 00:16:58.217 "nvme_io": false 00:16:58.217 }, 00:16:58.217 "memory_domains": [ 00:16:58.217 { 00:16:58.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.217 "dma_device_type": 2 00:16:58.217 } 00:16:58.217 ], 00:16:58.217 "driver_specific": {} 00:16:58.217 } 00:16:58.217 ] 00:16:58.217 00:59:32 -- common/autotest_common.sh@905 -- # return 0 00:16:58.217 00:59:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:58.217 00:59:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:58.217 00:59:32 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:58.217 00:59:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:58.217 00:59:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:58.217 00:59:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:58.217 00:59:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:58.217 00:59:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:58.217 00:59:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.217 00:59:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.217 00:59:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.217 00:59:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.217 00:59:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.217 00:59:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.476 00:59:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.476 "name": "Existed_Raid", 00:16:58.476 "uuid": "d27ecc44-3e49-4d33-85c4-4a0a4623a309", 00:16:58.476 "strip_size_kb": 64, 00:16:58.476 "state": "online", 00:16:58.476 "raid_level": "concat", 00:16:58.476 "superblock": true, 00:16:58.476 "num_base_bdevs": 3, 00:16:58.476 "num_base_bdevs_discovered": 3, 00:16:58.476 "num_base_bdevs_operational": 3, 00:16:58.476 "base_bdevs_list": [ 00:16:58.476 { 00:16:58.476 "name": "BaseBdev1", 00:16:58.476 "uuid": "9e9e4927-58b9-49b0-903a-a080a7526617", 00:16:58.476 "is_configured": true, 00:16:58.476 "data_offset": 2048, 00:16:58.476 "data_size": 63488 00:16:58.476 }, 00:16:58.476 { 00:16:58.476 "name": "BaseBdev2", 00:16:58.476 "uuid": "40c43973-d86f-4ce0-8330-0c3946267e0d", 00:16:58.476 "is_configured": true, 00:16:58.476 "data_offset": 2048, 00:16:58.476 "data_size": 63488 00:16:58.476 }, 00:16:58.476 { 00:16:58.476 "name": "BaseBdev3", 00:16:58.476 "uuid": "598ea956-5de2-45ef-acac-7fd6b2f79bb3", 00:16:58.476 "is_configured": true, 00:16:58.476 "data_offset": 2048, 00:16:58.476 "data_size": 63488 00:16:58.476 } 00:16:58.476 ] 00:16:58.476 }' 00:16:58.476 00:59:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.476 00:59:32 -- common/autotest_common.sh@10 -- # set +x 00:16:59.042 00:59:33 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:59.042 [2024-11-18 00:59:33.432711] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:59.042 [2024-11-18 00:59:33.432768] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.042 [2024-11-18 00:59:33.432852] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.300 00:59:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:59.300 00:59:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:59.300 00:59:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:59.300 00:59:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:59.300 00:59:33 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:59.300 00:59:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:59.301 00:59:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:59.301 00:59:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:59.301 00:59:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:59.301 00:59:33 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:59.301 00:59:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:59.301 00:59:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.301 00:59:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.301 00:59:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.301 00:59:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.301 00:59:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.301 00:59:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.301 00:59:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:59.301 "name": "Existed_Raid", 00:16:59.301 "uuid": "d27ecc44-3e49-4d33-85c4-4a0a4623a309", 00:16:59.301 "strip_size_kb": 64, 00:16:59.301 "state": "offline", 00:16:59.301 "raid_level": "concat", 00:16:59.301 "superblock": true, 00:16:59.301 "num_base_bdevs": 3, 00:16:59.301 "num_base_bdevs_discovered": 2, 00:16:59.301 "num_base_bdevs_operational": 2, 00:16:59.301 "base_bdevs_list": [ 00:16:59.301 { 00:16:59.301 "name": null, 00:16:59.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.301 "is_configured": false, 00:16:59.301 "data_offset": 2048, 00:16:59.301 "data_size": 63488 00:16:59.301 }, 00:16:59.301 { 00:16:59.301 "name": "BaseBdev2", 00:16:59.301 "uuid": "40c43973-d86f-4ce0-8330-0c3946267e0d", 00:16:59.301 "is_configured": true, 00:16:59.301 "data_offset": 2048, 00:16:59.301 "data_size": 63488 00:16:59.301 }, 00:16:59.301 { 00:16:59.301 "name": "BaseBdev3", 00:16:59.301 "uuid": "598ea956-5de2-45ef-acac-7fd6b2f79bb3", 00:16:59.301 "is_configured": true, 00:16:59.301 "data_offset": 2048, 00:16:59.301 "data_size": 63488 00:16:59.301 } 00:16:59.301 ] 00:16:59.301 }' 00:16:59.301 00:59:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:59.301 00:59:33 -- common/autotest_common.sh@10 -- # set +x 00:16:59.869 00:59:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:59.869 00:59:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:59.869 00:59:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.869 00:59:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:00.127 00:59:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:00.127 00:59:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:00.127 00:59:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:00.386 [2024-11-18 00:59:34.709255] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:00.386 00:59:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:00.386 00:59:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:00.386 00:59:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.386 00:59:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:00.644 00:59:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:00.644 00:59:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:00.644 00:59:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:00.904 [2024-11-18 00:59:35.118839] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
00:17:00.904 [2024-11-18 00:59:35.119384] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:17:00.904 00:59:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:00.904 00:59:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:00.904 00:59:35 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.904 00:59:35 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:01.163 00:59:35 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:01.163 00:59:35 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:01.163 00:59:35 -- bdev/bdev_raid.sh@287 -- # killprocess 127174 00:17:01.163 00:59:35 -- common/autotest_common.sh@936 -- # '[' -z 127174 ']' 00:17:01.163 00:59:35 -- common/autotest_common.sh@940 -- # kill -0 127174 00:17:01.163 00:59:35 -- common/autotest_common.sh@941 -- # uname 00:17:01.163 00:59:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:01.163 00:59:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127174 00:17:01.163 00:59:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:01.163 00:59:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:01.163 00:59:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127174' 00:17:01.163 killing process with pid 127174 00:17:01.163 00:59:35 -- common/autotest_common.sh@955 -- # kill 127174 00:17:01.163 [2024-11-18 00:59:35.398283] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:01.163 00:59:35 -- common/autotest_common.sh@960 -- # wait 127174 00:17:01.163 [2024-11-18 00:59:35.398785] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:01.422 00:59:35 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:01.422 00:17:01.422 real 0m11.740s 00:17:01.422 user 0m20.594s 00:17:01.422 sys 0m2.125s 00:17:01.422 00:59:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:01.422 00:59:35 -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 ************************************ 00:17:01.422 END TEST raid_state_function_test_sb 00:17:01.422 ************************************ 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:17:01.681 00:59:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:01.681 00:59:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:01.681 00:59:35 -- common/autotest_common.sh@10 -- # set +x 00:17:01.681 ************************************ 00:17:01.681 START TEST raid_superblock_test 00:17:01.681 ************************************ 00:17:01.681 00:59:35 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 3 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@344 -- # local strip_size 
00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@357 -- # raid_pid=127548 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:01.681 00:59:35 -- bdev/bdev_raid.sh@358 -- # waitforlisten 127548 /var/tmp/spdk-raid.sock 00:17:01.681 00:59:35 -- common/autotest_common.sh@829 -- # '[' -z 127548 ']' 00:17:01.681 00:59:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:01.681 00:59:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.681 00:59:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:01.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:01.681 00:59:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.681 00:59:35 -- common/autotest_common.sh@10 -- # set +x 00:17:01.681 [2024-11-18 00:59:35.953727] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:01.681 [2024-11-18 00:59:35.954346] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127548 ] 00:17:01.940 [2024-11-18 00:59:36.112146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.940 [2024-11-18 00:59:36.203829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.940 [2024-11-18 00:59:36.289868] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.506 00:59:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.506 00:59:36 -- common/autotest_common.sh@862 -- # return 0 00:17:02.506 00:59:36 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:02.506 00:59:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:02.506 00:59:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:02.506 00:59:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:02.506 00:59:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:02.506 00:59:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:02.506 00:59:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:02.506 00:59:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:02.506 00:59:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:02.764 malloc1 00:17:02.764 00:59:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:03.022 [2024-11-18 00:59:37.170374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:03.022 [2024-11-18 00:59:37.170774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:03.022 [2024-11-18 00:59:37.170877] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:17:03.022 [2024-11-18 00:59:37.171031] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.022 [2024-11-18 00:59:37.174071] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.022 [2024-11-18 00:59:37.174296] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:03.022 pt1 00:17:03.022 00:59:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:03.022 00:59:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:03.022 00:59:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:03.022 00:59:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:03.022 00:59:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:03.022 00:59:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:03.022 00:59:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:03.022 00:59:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:03.022 00:59:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:03.022 malloc2 00:17:03.022 00:59:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:03.280 [2024-11-18 00:59:37.574643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:03.280 [2024-11-18 00:59:37.574991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.280 [2024-11-18 00:59:37.575074] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:03.280 [2024-11-18 00:59:37.575206] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.280 [2024-11-18 00:59:37.578039] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.280 [2024-11-18 00:59:37.578229] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:03.280 pt2 00:17:03.280 00:59:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:03.280 00:59:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:03.280 00:59:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:03.280 00:59:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:03.280 00:59:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:03.280 00:59:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:03.280 00:59:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:03.280 00:59:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:03.280 00:59:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:03.538 malloc3 00:17:03.538 00:59:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:03.797 [2024-11-18 00:59:38.002630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:03.797 [2024-11-18 00:59:38.003029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:03.797 [2024-11-18 00:59:38.003115] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:03.797 [2024-11-18 00:59:38.003246] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.797 [2024-11-18 00:59:38.006078] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.797 [2024-11-18 00:59:38.006319] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:03.797 pt3 00:17:03.797 00:59:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:03.797 00:59:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:03.797 00:59:38 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:03.797 [2024-11-18 00:59:38.194819] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:03.797 [2024-11-18 00:59:38.197611] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:03.797 [2024-11-18 00:59:38.197821] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:03.797 [2024-11-18 00:59:38.198094] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:04.056 [2024-11-18 00:59:38.198236] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:04.056 [2024-11-18 00:59:38.198488] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:17:04.056 [2024-11-18 00:59:38.199187] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:04.056 [2024-11-18 00:59:38.199297] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:17:04.056 [2024-11-18 00:59:38.199598] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.056 00:59:38 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:04.056 00:59:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:04.056 00:59:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:04.056 00:59:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:04.056 00:59:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:04.056 00:59:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:04.056 00:59:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:04.056 00:59:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:04.056 00:59:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:04.056 00:59:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:04.056 00:59:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.056 00:59:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.315 00:59:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:04.315 "name": "raid_bdev1", 00:17:04.315 "uuid": "dbcc3c61-1273-4374-a5d8-f8c306dcdb09", 00:17:04.315 "strip_size_kb": 64, 00:17:04.315 "state": "online", 00:17:04.315 "raid_level": "concat", 00:17:04.315 "superblock": true, 00:17:04.315 "num_base_bdevs": 3, 00:17:04.315 "num_base_bdevs_discovered": 3, 00:17:04.315 "num_base_bdevs_operational": 3, 00:17:04.315 "base_bdevs_list": [ 00:17:04.315 { 00:17:04.315 "name": "pt1", 00:17:04.315 "uuid": 
"1b94c348-5229-55c1-a783-d23d9ac4415d", 00:17:04.315 "is_configured": true, 00:17:04.315 "data_offset": 2048, 00:17:04.315 "data_size": 63488 00:17:04.315 }, 00:17:04.315 { 00:17:04.315 "name": "pt2", 00:17:04.315 "uuid": "f5bb0aeb-03a7-5bef-b4e0-1f50b92afa33", 00:17:04.315 "is_configured": true, 00:17:04.315 "data_offset": 2048, 00:17:04.315 "data_size": 63488 00:17:04.315 }, 00:17:04.315 { 00:17:04.315 "name": "pt3", 00:17:04.315 "uuid": "5cc2783c-df93-51dd-86a7-f650e784d126", 00:17:04.315 "is_configured": true, 00:17:04.315 "data_offset": 2048, 00:17:04.315 "data_size": 63488 00:17:04.315 } 00:17:04.315 ] 00:17:04.315 }' 00:17:04.315 00:59:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:04.315 00:59:38 -- common/autotest_common.sh@10 -- # set +x 00:17:04.882 00:59:39 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:04.882 00:59:39 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:05.140 [2024-11-18 00:59:39.288019] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.140 00:59:39 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=dbcc3c61-1273-4374-a5d8-f8c306dcdb09 00:17:05.140 00:59:39 -- bdev/bdev_raid.sh@380 -- # '[' -z dbcc3c61-1273-4374-a5d8-f8c306dcdb09 ']' 00:17:05.140 00:59:39 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:05.140 [2024-11-18 00:59:39.487816] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.140 [2024-11-18 00:59:39.488113] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.140 [2024-11-18 00:59:39.488307] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.140 [2024-11-18 00:59:39.488439] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.140 [2024-11-18 00:59:39.488611] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:17:05.140 00:59:39 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.140 00:59:39 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:05.399 00:59:39 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:05.399 00:59:39 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:05.399 00:59:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:05.399 00:59:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:05.659 00:59:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:05.659 00:59:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:05.917 00:59:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:05.917 00:59:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:06.175 00:59:40 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:06.175 00:59:40 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:06.175 00:59:40 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:06.175 00:59:40 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:06.175 00:59:40 -- common/autotest_common.sh@650 -- # local es=0 00:17:06.434 00:59:40 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:06.434 00:59:40 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.434 00:59:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.434 00:59:40 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.434 00:59:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.434 00:59:40 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.434 00:59:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.434 00:59:40 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.434 00:59:40 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:06.434 00:59:40 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:06.434 [2024-11-18 00:59:40.788053] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:06.434 [2024-11-18 00:59:40.790667] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:06.434 [2024-11-18 00:59:40.790857] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:06.434 [2024-11-18 00:59:40.791049] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:06.434 [2024-11-18 00:59:40.791226] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:06.434 [2024-11-18 00:59:40.791346] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:06.434 [2024-11-18 00:59:40.791469] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.434 [2024-11-18 00:59:40.791551] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:17:06.434 request: 00:17:06.434 { 00:17:06.434 "name": "raid_bdev1", 00:17:06.434 "raid_level": "concat", 00:17:06.434 "base_bdevs": [ 00:17:06.434 "malloc1", 00:17:06.434 "malloc2", 00:17:06.434 "malloc3" 00:17:06.434 ], 00:17:06.434 "superblock": false, 00:17:06.434 "strip_size_kb": 64, 00:17:06.434 "method": "bdev_raid_create", 00:17:06.434 "req_id": 1 00:17:06.434 } 00:17:06.434 Got JSON-RPC error response 00:17:06.434 response: 00:17:06.434 { 00:17:06.434 "code": -17, 00:17:06.434 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:06.434 } 00:17:06.434 00:59:40 -- common/autotest_common.sh@653 -- # es=1 00:17:06.434 00:59:40 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:06.434 00:59:40 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:06.434 00:59:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:06.434 00:59:40 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.434 00:59:40 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:06.709 00:59:41 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:06.709 00:59:41 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:06.709 00:59:41 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:06.984 [2024-11-18 00:59:41.316216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:06.984 [2024-11-18 00:59:41.316590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.984 [2024-11-18 00:59:41.316677] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:06.984 [2024-11-18 00:59:41.316792] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.984 [2024-11-18 00:59:41.319597] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.984 [2024-11-18 00:59:41.319771] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:06.984 [2024-11-18 00:59:41.319968] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:06.984 [2024-11-18 00:59:41.320153] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:06.984 pt1 00:17:06.984 00:59:41 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:06.984 00:59:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:06.984 00:59:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:06.984 00:59:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:06.984 00:59:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:06.984 00:59:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:06.984 00:59:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:06.984 00:59:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:06.984 00:59:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:06.984 00:59:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:06.984 00:59:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.984 00:59:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.243 00:59:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.243 "name": "raid_bdev1", 00:17:07.243 "uuid": "dbcc3c61-1273-4374-a5d8-f8c306dcdb09", 00:17:07.243 "strip_size_kb": 64, 00:17:07.243 "state": "configuring", 00:17:07.243 "raid_level": "concat", 00:17:07.243 "superblock": true, 00:17:07.243 "num_base_bdevs": 3, 00:17:07.243 "num_base_bdevs_discovered": 1, 00:17:07.243 "num_base_bdevs_operational": 3, 00:17:07.243 "base_bdevs_list": [ 00:17:07.243 { 00:17:07.243 "name": "pt1", 00:17:07.243 "uuid": "1b94c348-5229-55c1-a783-d23d9ac4415d", 00:17:07.243 "is_configured": true, 00:17:07.243 "data_offset": 2048, 00:17:07.243 "data_size": 63488 00:17:07.243 }, 00:17:07.243 { 00:17:07.243 "name": null, 00:17:07.243 "uuid": "f5bb0aeb-03a7-5bef-b4e0-1f50b92afa33", 00:17:07.243 "is_configured": false, 00:17:07.243 "data_offset": 2048, 00:17:07.243 "data_size": 63488 00:17:07.243 }, 00:17:07.243 { 00:17:07.243 "name": null, 00:17:07.243 "uuid": "5cc2783c-df93-51dd-86a7-f650e784d126", 00:17:07.243 "is_configured": false, 00:17:07.243 
"data_offset": 2048, 00:17:07.243 "data_size": 63488 00:17:07.243 } 00:17:07.243 ] 00:17:07.243 }' 00:17:07.243 00:59:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.243 00:59:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.811 00:59:42 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:07.811 00:59:42 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:08.070 [2024-11-18 00:59:42.300662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:08.070 [2024-11-18 00:59:42.301052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.070 [2024-11-18 00:59:42.301143] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:08.070 [2024-11-18 00:59:42.301261] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.070 [2024-11-18 00:59:42.301776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.070 [2024-11-18 00:59:42.301926] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:08.070 [2024-11-18 00:59:42.302121] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:08.070 [2024-11-18 00:59:42.302241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.070 pt2 00:17:08.070 00:59:42 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:08.328 [2024-11-18 00:59:42.492734] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:08.328 00:59:42 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:08.328 00:59:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:08.328 00:59:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:08.328 00:59:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:08.328 00:59:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:08.328 00:59:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:08.328 00:59:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:08.328 00:59:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:08.328 00:59:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:08.328 00:59:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:08.329 00:59:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.329 00:59:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.329 00:59:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.329 "name": "raid_bdev1", 00:17:08.329 "uuid": "dbcc3c61-1273-4374-a5d8-f8c306dcdb09", 00:17:08.329 "strip_size_kb": 64, 00:17:08.329 "state": "configuring", 00:17:08.329 "raid_level": "concat", 00:17:08.329 "superblock": true, 00:17:08.329 "num_base_bdevs": 3, 00:17:08.329 "num_base_bdevs_discovered": 1, 00:17:08.329 "num_base_bdevs_operational": 3, 00:17:08.329 "base_bdevs_list": [ 00:17:08.329 { 00:17:08.329 "name": "pt1", 00:17:08.329 "uuid": "1b94c348-5229-55c1-a783-d23d9ac4415d", 00:17:08.329 "is_configured": true, 00:17:08.329 "data_offset": 2048, 00:17:08.329 "data_size": 63488 00:17:08.329 }, 00:17:08.329 { 00:17:08.329 "name": null, 00:17:08.329 "uuid": 
"f5bb0aeb-03a7-5bef-b4e0-1f50b92afa33", 00:17:08.329 "is_configured": false, 00:17:08.329 "data_offset": 2048, 00:17:08.329 "data_size": 63488 00:17:08.329 }, 00:17:08.329 { 00:17:08.329 "name": null, 00:17:08.329 "uuid": "5cc2783c-df93-51dd-86a7-f650e784d126", 00:17:08.329 "is_configured": false, 00:17:08.329 "data_offset": 2048, 00:17:08.329 "data_size": 63488 00:17:08.329 } 00:17:08.329 ] 00:17:08.329 }' 00:17:08.329 00:59:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.329 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:17:09.264 00:59:43 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:09.264 00:59:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:09.264 00:59:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:09.264 [2024-11-18 00:59:43.468861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:09.264 [2024-11-18 00:59:43.469662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.264 [2024-11-18 00:59:43.469742] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:09.264 [2024-11-18 00:59:43.469863] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.264 [2024-11-18 00:59:43.470456] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.264 [2024-11-18 00:59:43.470613] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:09.264 [2024-11-18 00:59:43.470819] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:09.264 [2024-11-18 00:59:43.470919] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.264 pt2 00:17:09.264 00:59:43 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:09.264 00:59:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:09.264 00:59:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:09.522 [2024-11-18 00:59:43.732944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:09.522 [2024-11-18 00:59:43.733317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.522 [2024-11-18 00:59:43.733393] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:09.522 [2024-11-18 00:59:43.733490] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.522 [2024-11-18 00:59:43.734012] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.522 [2024-11-18 00:59:43.734169] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:09.522 [2024-11-18 00:59:43.734367] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:09.522 [2024-11-18 00:59:43.734473] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:09.522 [2024-11-18 00:59:43.734631] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:17:09.522 [2024-11-18 00:59:43.734773] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:09.522 [2024-11-18 00:59:43.734908] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000026d0 00:17:09.522 [2024-11-18 00:59:43.735313] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:17:09.522 [2024-11-18 00:59:43.735421] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:17:09.522 [2024-11-18 00:59:43.735599] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.523 pt3 00:17:09.523 00:59:43 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:09.523 00:59:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:09.523 00:59:43 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:09.523 00:59:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:09.523 00:59:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:09.523 00:59:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:09.523 00:59:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:09.523 00:59:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:09.523 00:59:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.523 00:59:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.523 00:59:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.523 00:59:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.523 00:59:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.523 00:59:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.781 00:59:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.781 "name": "raid_bdev1", 00:17:09.781 "uuid": "dbcc3c61-1273-4374-a5d8-f8c306dcdb09", 00:17:09.781 "strip_size_kb": 64, 00:17:09.781 "state": "online", 00:17:09.781 "raid_level": "concat", 00:17:09.781 "superblock": true, 00:17:09.781 "num_base_bdevs": 3, 00:17:09.781 "num_base_bdevs_discovered": 3, 00:17:09.781 "num_base_bdevs_operational": 3, 00:17:09.781 "base_bdevs_list": [ 00:17:09.781 { 00:17:09.781 "name": "pt1", 00:17:09.781 "uuid": "1b94c348-5229-55c1-a783-d23d9ac4415d", 00:17:09.781 "is_configured": true, 00:17:09.781 "data_offset": 2048, 00:17:09.781 "data_size": 63488 00:17:09.781 }, 00:17:09.781 { 00:17:09.781 "name": "pt2", 00:17:09.781 "uuid": "f5bb0aeb-03a7-5bef-b4e0-1f50b92afa33", 00:17:09.781 "is_configured": true, 00:17:09.781 "data_offset": 2048, 00:17:09.781 "data_size": 63488 00:17:09.781 }, 00:17:09.781 { 00:17:09.781 "name": "pt3", 00:17:09.781 "uuid": "5cc2783c-df93-51dd-86a7-f650e784d126", 00:17:09.781 "is_configured": true, 00:17:09.781 "data_offset": 2048, 00:17:09.781 "data_size": 63488 00:17:09.781 } 00:17:09.781 ] 00:17:09.781 }' 00:17:09.781 00:59:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.781 00:59:43 -- common/autotest_common.sh@10 -- # set +x 00:17:10.348 00:59:44 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:10.348 00:59:44 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:10.607 [2024-11-18 00:59:44.793377] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.607 00:59:44 -- bdev/bdev_raid.sh@430 -- # '[' dbcc3c61-1273-4374-a5d8-f8c306dcdb09 '!=' dbcc3c61-1273-4374-a5d8-f8c306dcdb09 ']' 00:17:10.607 00:59:44 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:10.607 00:59:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:10.607 
00:59:44 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:10.607 00:59:44 -- bdev/bdev_raid.sh@511 -- # killprocess 127548 00:17:10.607 00:59:44 -- common/autotest_common.sh@936 -- # '[' -z 127548 ']' 00:17:10.607 00:59:44 -- common/autotest_common.sh@940 -- # kill -0 127548 00:17:10.607 00:59:44 -- common/autotest_common.sh@941 -- # uname 00:17:10.607 00:59:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:10.607 00:59:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127548 00:17:10.607 00:59:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:10.607 00:59:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:10.607 00:59:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127548' 00:17:10.607 killing process with pid 127548 00:17:10.607 00:59:44 -- common/autotest_common.sh@955 -- # kill 127548 00:17:10.607 [2024-11-18 00:59:44.860159] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.607 00:59:44 -- common/autotest_common.sh@960 -- # wait 127548 00:17:10.607 [2024-11-18 00:59:44.860416] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.607 [2024-11-18 00:59:44.860650] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.607 [2024-11-18 00:59:44.860743] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:17:10.607 [2024-11-18 00:59:44.923758] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:11.175 ************************************ 00:17:11.175 END TEST raid_superblock_test 00:17:11.175 ************************************ 00:17:11.175 00:59:45 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:11.176 00:17:11.176 real 0m9.440s 00:17:11.176 user 0m16.210s 00:17:11.176 sys 0m1.853s 00:17:11.176 00:59:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:11.176 00:59:45 -- common/autotest_common.sh@10 -- # set +x 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:17:11.176 00:59:45 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:11.176 00:59:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:11.176 00:59:45 -- common/autotest_common.sh@10 -- # set +x 00:17:11.176 ************************************ 00:17:11.176 START TEST raid_state_function_test 00:17:11.176 ************************************ 00:17:11.176 00:59:45 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 false 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@226 -- # raid_pid=127855 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127855' 00:17:11.176 Process raid pid: 127855 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:11.176 00:59:45 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127855 /var/tmp/spdk-raid.sock 00:17:11.176 00:59:45 -- common/autotest_common.sh@829 -- # '[' -z 127855 ']' 00:17:11.176 00:59:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:11.176 00:59:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.176 00:59:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:11.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:11.176 00:59:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.176 00:59:45 -- common/autotest_common.sh@10 -- # set +x 00:17:11.176 [2024-11-18 00:59:45.472015] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:11.176 [2024-11-18 00:59:45.472599] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.434 [2024-11-18 00:59:45.626184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.434 [2024-11-18 00:59:45.706892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.434 [2024-11-18 00:59:45.786072] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.002 00:59:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.002 00:59:46 -- common/autotest_common.sh@862 -- # return 0 00:17:12.002 00:59:46 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:12.261 [2024-11-18 00:59:46.519803] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:12.261 [2024-11-18 00:59:46.520113] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:12.261 [2024-11-18 00:59:46.520191] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:12.261 [2024-11-18 00:59:46.520241] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:12.261 [2024-11-18 00:59:46.520308] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:12.261 [2024-11-18 00:59:46.520440] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:12.261 00:59:46 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:12.261 00:59:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:12.261 00:59:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:12.261 00:59:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:12.261 00:59:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:12.261 00:59:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:12.261 00:59:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.261 00:59:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.261 00:59:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.261 00:59:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.261 00:59:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.261 00:59:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.520 00:59:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.520 "name": "Existed_Raid", 00:17:12.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.520 "strip_size_kb": 0, 00:17:12.520 "state": "configuring", 00:17:12.520 "raid_level": "raid1", 00:17:12.520 "superblock": false, 00:17:12.520 "num_base_bdevs": 3, 00:17:12.520 "num_base_bdevs_discovered": 0, 00:17:12.520 "num_base_bdevs_operational": 3, 00:17:12.520 "base_bdevs_list": [ 00:17:12.520 { 00:17:12.520 "name": "BaseBdev1", 00:17:12.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.520 "is_configured": false, 00:17:12.520 "data_offset": 0, 00:17:12.520 "data_size": 0 00:17:12.520 }, 00:17:12.520 { 00:17:12.520 "name": "BaseBdev2", 00:17:12.520 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:12.520 "is_configured": false, 00:17:12.520 "data_offset": 0, 00:17:12.520 "data_size": 0 00:17:12.520 }, 00:17:12.520 { 00:17:12.520 "name": "BaseBdev3", 00:17:12.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.520 "is_configured": false, 00:17:12.520 "data_offset": 0, 00:17:12.520 "data_size": 0 00:17:12.520 } 00:17:12.520 ] 00:17:12.520 }' 00:17:12.520 00:59:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.520 00:59:46 -- common/autotest_common.sh@10 -- # set +x 00:17:13.088 00:59:47 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:13.346 [2024-11-18 00:59:47.687860] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.346 [2024-11-18 00:59:47.688141] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:13.347 00:59:47 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:13.605 [2024-11-18 00:59:47.955985] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.605 [2024-11-18 00:59:47.956327] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:13.605 [2024-11-18 00:59:47.956412] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.605 [2024-11-18 00:59:47.956469] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.605 [2024-11-18 00:59:47.956495] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:13.605 [2024-11-18 00:59:47.956602] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:13.605 00:59:47 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:13.864 [2024-11-18 00:59:48.160089] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.864 BaseBdev1 00:17:13.864 00:59:48 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:13.864 00:59:48 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:13.864 00:59:48 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:13.864 00:59:48 -- common/autotest_common.sh@899 -- # local i 00:17:13.864 00:59:48 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:13.864 00:59:48 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:13.864 00:59:48 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:14.123 00:59:48 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:14.381 [ 00:17:14.381 { 00:17:14.381 "name": "BaseBdev1", 00:17:14.381 "aliases": [ 00:17:14.381 "56f23d3e-472d-4dbf-bdfc-a5f2e32cc126" 00:17:14.381 ], 00:17:14.381 "product_name": "Malloc disk", 00:17:14.381 "block_size": 512, 00:17:14.381 "num_blocks": 65536, 00:17:14.381 "uuid": "56f23d3e-472d-4dbf-bdfc-a5f2e32cc126", 00:17:14.381 "assigned_rate_limits": { 00:17:14.381 "rw_ios_per_sec": 0, 00:17:14.381 "rw_mbytes_per_sec": 0, 00:17:14.381 "r_mbytes_per_sec": 0, 00:17:14.381 "w_mbytes_per_sec": 0 
00:17:14.381 }, 00:17:14.381 "claimed": true, 00:17:14.381 "claim_type": "exclusive_write", 00:17:14.381 "zoned": false, 00:17:14.381 "supported_io_types": { 00:17:14.381 "read": true, 00:17:14.381 "write": true, 00:17:14.381 "unmap": true, 00:17:14.381 "write_zeroes": true, 00:17:14.381 "flush": true, 00:17:14.381 "reset": true, 00:17:14.381 "compare": false, 00:17:14.381 "compare_and_write": false, 00:17:14.381 "abort": true, 00:17:14.381 "nvme_admin": false, 00:17:14.381 "nvme_io": false 00:17:14.381 }, 00:17:14.381 "memory_domains": [ 00:17:14.381 { 00:17:14.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.381 "dma_device_type": 2 00:17:14.381 } 00:17:14.381 ], 00:17:14.381 "driver_specific": {} 00:17:14.381 } 00:17:14.381 ] 00:17:14.381 00:59:48 -- common/autotest_common.sh@905 -- # return 0 00:17:14.382 00:59:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:14.382 00:59:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.382 00:59:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:14.382 00:59:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:14.382 00:59:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:14.382 00:59:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:14.382 00:59:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.382 00:59:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.382 00:59:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.382 00:59:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.382 00:59:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.382 00:59:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.640 00:59:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.640 "name": "Existed_Raid", 00:17:14.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.640 "strip_size_kb": 0, 00:17:14.640 "state": "configuring", 00:17:14.640 "raid_level": "raid1", 00:17:14.640 "superblock": false, 00:17:14.640 "num_base_bdevs": 3, 00:17:14.640 "num_base_bdevs_discovered": 1, 00:17:14.640 "num_base_bdevs_operational": 3, 00:17:14.640 "base_bdevs_list": [ 00:17:14.640 { 00:17:14.640 "name": "BaseBdev1", 00:17:14.640 "uuid": "56f23d3e-472d-4dbf-bdfc-a5f2e32cc126", 00:17:14.640 "is_configured": true, 00:17:14.640 "data_offset": 0, 00:17:14.640 "data_size": 65536 00:17:14.640 }, 00:17:14.640 { 00:17:14.640 "name": "BaseBdev2", 00:17:14.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.640 "is_configured": false, 00:17:14.640 "data_offset": 0, 00:17:14.640 "data_size": 0 00:17:14.640 }, 00:17:14.640 { 00:17:14.640 "name": "BaseBdev3", 00:17:14.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.640 "is_configured": false, 00:17:14.640 "data_offset": 0, 00:17:14.640 "data_size": 0 00:17:14.640 } 00:17:14.640 ] 00:17:14.640 }' 00:17:14.640 00:59:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.640 00:59:48 -- common/autotest_common.sh@10 -- # set +x 00:17:15.206 00:59:49 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:15.464 [2024-11-18 00:59:49.624402] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.464 [2024-11-18 00:59:49.624733] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 
name Existed_Raid, state configuring 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:15.464 [2024-11-18 00:59:49.824559] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:15.464 [2024-11-18 00:59:49.827297] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:15.464 [2024-11-18 00:59:49.827494] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:15.464 [2024-11-18 00:59:49.827581] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:15.464 [2024-11-18 00:59:49.827642] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.464 00:59:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.723 00:59:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:15.723 "name": "Existed_Raid", 00:17:15.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.723 "strip_size_kb": 0, 00:17:15.723 "state": "configuring", 00:17:15.723 "raid_level": "raid1", 00:17:15.723 "superblock": false, 00:17:15.723 "num_base_bdevs": 3, 00:17:15.723 "num_base_bdevs_discovered": 1, 00:17:15.723 "num_base_bdevs_operational": 3, 00:17:15.723 "base_bdevs_list": [ 00:17:15.723 { 00:17:15.723 "name": "BaseBdev1", 00:17:15.723 "uuid": "56f23d3e-472d-4dbf-bdfc-a5f2e32cc126", 00:17:15.723 "is_configured": true, 00:17:15.723 "data_offset": 0, 00:17:15.723 "data_size": 65536 00:17:15.723 }, 00:17:15.723 { 00:17:15.723 "name": "BaseBdev2", 00:17:15.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.723 "is_configured": false, 00:17:15.723 "data_offset": 0, 00:17:15.723 "data_size": 0 00:17:15.723 }, 00:17:15.723 { 00:17:15.723 "name": "BaseBdev3", 00:17:15.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.723 "is_configured": false, 00:17:15.723 "data_offset": 0, 00:17:15.723 "data_size": 0 00:17:15.723 } 00:17:15.723 ] 00:17:15.723 }' 00:17:15.723 00:59:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:15.723 00:59:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.290 00:59:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:16.549 [2024-11-18 00:59:50.835863] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:16.549 BaseBdev2 00:17:16.549 00:59:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:16.549 00:59:50 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:16.549 00:59:50 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:16.549 00:59:50 -- common/autotest_common.sh@899 -- # local i 00:17:16.549 00:59:50 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:16.549 00:59:50 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:16.549 00:59:50 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:16.807 00:59:51 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:17.066 [ 00:17:17.066 { 00:17:17.066 "name": "BaseBdev2", 00:17:17.066 "aliases": [ 00:17:17.066 "086c341d-275e-4931-acbc-e6e8d0064b27" 00:17:17.066 ], 00:17:17.066 "product_name": "Malloc disk", 00:17:17.066 "block_size": 512, 00:17:17.066 "num_blocks": 65536, 00:17:17.066 "uuid": "086c341d-275e-4931-acbc-e6e8d0064b27", 00:17:17.066 "assigned_rate_limits": { 00:17:17.066 "rw_ios_per_sec": 0, 00:17:17.066 "rw_mbytes_per_sec": 0, 00:17:17.066 "r_mbytes_per_sec": 0, 00:17:17.066 "w_mbytes_per_sec": 0 00:17:17.066 }, 00:17:17.066 "claimed": true, 00:17:17.066 "claim_type": "exclusive_write", 00:17:17.066 "zoned": false, 00:17:17.066 "supported_io_types": { 00:17:17.066 "read": true, 00:17:17.066 "write": true, 00:17:17.066 "unmap": true, 00:17:17.066 "write_zeroes": true, 00:17:17.066 "flush": true, 00:17:17.066 "reset": true, 00:17:17.066 "compare": false, 00:17:17.066 "compare_and_write": false, 00:17:17.066 "abort": true, 00:17:17.066 "nvme_admin": false, 00:17:17.066 "nvme_io": false 00:17:17.066 }, 00:17:17.066 "memory_domains": [ 00:17:17.066 { 00:17:17.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.066 "dma_device_type": 2 00:17:17.066 } 00:17:17.066 ], 00:17:17.066 "driver_specific": {} 00:17:17.066 } 00:17:17.066 ] 00:17:17.066 00:59:51 -- common/autotest_common.sh@905 -- # return 0 00:17:17.066 00:59:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:17.066 00:59:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:17.066 00:59:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:17.066 00:59:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:17.066 00:59:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:17.066 00:59:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:17.066 00:59:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:17.066 00:59:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:17.066 00:59:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:17.066 00:59:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:17.066 00:59:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:17.066 00:59:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:17.066 00:59:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.066 00:59:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.325 00:59:51 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:17:17.325 "name": "Existed_Raid", 00:17:17.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.325 "strip_size_kb": 0, 00:17:17.325 "state": "configuring", 00:17:17.325 "raid_level": "raid1", 00:17:17.325 "superblock": false, 00:17:17.325 "num_base_bdevs": 3, 00:17:17.325 "num_base_bdevs_discovered": 2, 00:17:17.325 "num_base_bdevs_operational": 3, 00:17:17.325 "base_bdevs_list": [ 00:17:17.325 { 00:17:17.325 "name": "BaseBdev1", 00:17:17.325 "uuid": "56f23d3e-472d-4dbf-bdfc-a5f2e32cc126", 00:17:17.325 "is_configured": true, 00:17:17.325 "data_offset": 0, 00:17:17.325 "data_size": 65536 00:17:17.325 }, 00:17:17.325 { 00:17:17.325 "name": "BaseBdev2", 00:17:17.325 "uuid": "086c341d-275e-4931-acbc-e6e8d0064b27", 00:17:17.325 "is_configured": true, 00:17:17.325 "data_offset": 0, 00:17:17.325 "data_size": 65536 00:17:17.325 }, 00:17:17.325 { 00:17:17.325 "name": "BaseBdev3", 00:17:17.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.325 "is_configured": false, 00:17:17.325 "data_offset": 0, 00:17:17.325 "data_size": 0 00:17:17.325 } 00:17:17.325 ] 00:17:17.325 }' 00:17:17.325 00:59:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.325 00:59:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.892 00:59:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:18.151 [2024-11-18 00:59:52.397822] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:18.151 [2024-11-18 00:59:52.398170] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:17:18.151 [2024-11-18 00:59:52.398233] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:18.151 [2024-11-18 00:59:52.398504] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:17:18.151 [2024-11-18 00:59:52.399070] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:17:18.151 [2024-11-18 00:59:52.399191] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:17:18.151 [2024-11-18 00:59:52.399584] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.151 BaseBdev3 00:17:18.151 00:59:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:18.151 00:59:52 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:18.151 00:59:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:18.151 00:59:52 -- common/autotest_common.sh@899 -- # local i 00:17:18.151 00:59:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:18.151 00:59:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:18.151 00:59:52 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:18.410 00:59:52 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:18.668 [ 00:17:18.668 { 00:17:18.668 "name": "BaseBdev3", 00:17:18.668 "aliases": [ 00:17:18.668 "fc2e3cf1-9492-4b7a-9acb-0a1a0b8e3d02" 00:17:18.668 ], 00:17:18.668 "product_name": "Malloc disk", 00:17:18.668 "block_size": 512, 00:17:18.668 "num_blocks": 65536, 00:17:18.668 "uuid": "fc2e3cf1-9492-4b7a-9acb-0a1a0b8e3d02", 00:17:18.668 "assigned_rate_limits": { 00:17:18.668 "rw_ios_per_sec": 0, 00:17:18.668 "rw_mbytes_per_sec": 0, 
00:17:18.668 "r_mbytes_per_sec": 0, 00:17:18.668 "w_mbytes_per_sec": 0 00:17:18.668 }, 00:17:18.668 "claimed": true, 00:17:18.668 "claim_type": "exclusive_write", 00:17:18.668 "zoned": false, 00:17:18.668 "supported_io_types": { 00:17:18.668 "read": true, 00:17:18.668 "write": true, 00:17:18.668 "unmap": true, 00:17:18.668 "write_zeroes": true, 00:17:18.668 "flush": true, 00:17:18.668 "reset": true, 00:17:18.668 "compare": false, 00:17:18.668 "compare_and_write": false, 00:17:18.668 "abort": true, 00:17:18.668 "nvme_admin": false, 00:17:18.668 "nvme_io": false 00:17:18.668 }, 00:17:18.668 "memory_domains": [ 00:17:18.668 { 00:17:18.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.668 "dma_device_type": 2 00:17:18.668 } 00:17:18.668 ], 00:17:18.668 "driver_specific": {} 00:17:18.668 } 00:17:18.668 ] 00:17:18.668 00:59:52 -- common/autotest_common.sh@905 -- # return 0 00:17:18.668 00:59:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:18.668 00:59:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:18.668 00:59:52 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:18.668 00:59:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:18.668 00:59:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:18.668 00:59:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:18.668 00:59:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:18.668 00:59:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:18.668 00:59:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:18.668 00:59:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:18.668 00:59:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:18.668 00:59:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:18.668 00:59:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.668 00:59:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.926 00:59:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:18.926 "name": "Existed_Raid", 00:17:18.926 "uuid": "f567d703-e5e5-4bc2-a5c5-29934c27dd76", 00:17:18.926 "strip_size_kb": 0, 00:17:18.926 "state": "online", 00:17:18.926 "raid_level": "raid1", 00:17:18.926 "superblock": false, 00:17:18.926 "num_base_bdevs": 3, 00:17:18.926 "num_base_bdevs_discovered": 3, 00:17:18.926 "num_base_bdevs_operational": 3, 00:17:18.926 "base_bdevs_list": [ 00:17:18.926 { 00:17:18.926 "name": "BaseBdev1", 00:17:18.926 "uuid": "56f23d3e-472d-4dbf-bdfc-a5f2e32cc126", 00:17:18.926 "is_configured": true, 00:17:18.926 "data_offset": 0, 00:17:18.926 "data_size": 65536 00:17:18.926 }, 00:17:18.926 { 00:17:18.926 "name": "BaseBdev2", 00:17:18.926 "uuid": "086c341d-275e-4931-acbc-e6e8d0064b27", 00:17:18.926 "is_configured": true, 00:17:18.926 "data_offset": 0, 00:17:18.926 "data_size": 65536 00:17:18.926 }, 00:17:18.926 { 00:17:18.926 "name": "BaseBdev3", 00:17:18.926 "uuid": "fc2e3cf1-9492-4b7a-9acb-0a1a0b8e3d02", 00:17:18.926 "is_configured": true, 00:17:18.926 "data_offset": 0, 00:17:18.926 "data_size": 65536 00:17:18.926 } 00:17:18.926 ] 00:17:18.926 }' 00:17:18.926 00:59:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:18.926 00:59:53 -- common/autotest_common.sh@10 -- # set +x 00:17:19.493 00:59:53 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:19.493 [2024-11-18 
00:59:53.874317] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.751 00:59:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.751 00:59:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.751 "name": "Existed_Raid", 00:17:19.751 "uuid": "f567d703-e5e5-4bc2-a5c5-29934c27dd76", 00:17:19.751 "strip_size_kb": 0, 00:17:19.751 "state": "online", 00:17:19.751 "raid_level": "raid1", 00:17:19.751 "superblock": false, 00:17:19.751 "num_base_bdevs": 3, 00:17:19.751 "num_base_bdevs_discovered": 2, 00:17:19.751 "num_base_bdevs_operational": 2, 00:17:19.751 "base_bdevs_list": [ 00:17:19.751 { 00:17:19.751 "name": null, 00:17:19.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.751 "is_configured": false, 00:17:19.751 "data_offset": 0, 00:17:19.751 "data_size": 65536 00:17:19.751 }, 00:17:19.751 { 00:17:19.751 "name": "BaseBdev2", 00:17:19.751 "uuid": "086c341d-275e-4931-acbc-e6e8d0064b27", 00:17:19.751 "is_configured": true, 00:17:19.751 "data_offset": 0, 00:17:19.751 "data_size": 65536 00:17:19.751 }, 00:17:19.751 { 00:17:19.751 "name": "BaseBdev3", 00:17:19.751 "uuid": "fc2e3cf1-9492-4b7a-9acb-0a1a0b8e3d02", 00:17:19.751 "is_configured": true, 00:17:19.751 "data_offset": 0, 00:17:19.751 "data_size": 65536 00:17:19.751 } 00:17:19.751 ] 00:17:19.751 }' 00:17:19.751 00:59:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.751 00:59:54 -- common/autotest_common.sh@10 -- # set +x 00:17:20.687 00:59:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:20.687 00:59:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:20.687 00:59:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:20.687 00:59:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.687 00:59:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:20.687 00:59:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:20.687 00:59:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:20.946 [2024-11-18 00:59:55.108515] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:17:20.946 00:59:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:20.946 00:59:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:20.946 00:59:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.946 00:59:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:20.946 00:59:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:20.946 00:59:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:20.946 00:59:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:21.205 [2024-11-18 00:59:55.573893] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:21.205 [2024-11-18 00:59:55.574203] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.205 [2024-11-18 00:59:55.574451] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.205 [2024-11-18 00:59:55.595766] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.205 [2024-11-18 00:59:55.596019] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:17:21.464 00:59:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:21.464 00:59:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:21.464 00:59:55 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.464 00:59:55 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:21.723 00:59:55 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:21.723 00:59:55 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:21.723 00:59:55 -- bdev/bdev_raid.sh@287 -- # killprocess 127855 00:17:21.723 00:59:55 -- common/autotest_common.sh@936 -- # '[' -z 127855 ']' 00:17:21.723 00:59:55 -- common/autotest_common.sh@940 -- # kill -0 127855 00:17:21.724 00:59:55 -- common/autotest_common.sh@941 -- # uname 00:17:21.724 00:59:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.724 00:59:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127855 00:17:21.724 killing process with pid 127855 00:17:21.724 00:59:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:21.724 00:59:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:21.724 00:59:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127855' 00:17:21.724 00:59:55 -- common/autotest_common.sh@955 -- # kill 127855 00:17:21.724 [2024-11-18 00:59:55.939666] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.724 00:59:55 -- common/autotest_common.sh@960 -- # wait 127855 00:17:21.724 [2024-11-18 00:59:55.939775] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.982 00:59:56 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:21.982 00:17:21.982 real 0m10.942s 00:17:21.982 user 0m19.229s 00:17:21.982 sys 0m2.013s 00:17:21.982 00:59:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:21.982 00:59:56 -- common/autotest_common.sh@10 -- # set +x 00:17:21.982 ************************************ 00:17:21.982 END TEST raid_state_function_test 00:17:21.982 ************************************ 00:17:22.241 00:59:56 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
00:17:22.241 00:59:56 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:22.241 00:59:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:22.241 00:59:56 -- common/autotest_common.sh@10 -- # set +x 00:17:22.241 ************************************ 00:17:22.241 START TEST raid_state_function_test_sb 00:17:22.241 ************************************ 00:17:22.241 00:59:56 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 true 00:17:22.241 00:59:56 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:22.241 00:59:56 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:22.241 00:59:56 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:22.241 00:59:56 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:22.241 00:59:56 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:22.241 00:59:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@226 -- # raid_pid=128211 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128211' 00:17:22.242 Process raid pid: 128211 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128211 /var/tmp/spdk-raid.sock 00:17:22.242 00:59:56 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:22.242 00:59:56 -- common/autotest_common.sh@829 -- # '[' -z 128211 ']' 00:17:22.242 00:59:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:22.242 00:59:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:22.242 00:59:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:22.242 00:59:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.242 00:59:56 -- common/autotest_common.sh@10 -- # set +x 00:17:22.242 [2024-11-18 00:59:56.479991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:22.242 [2024-11-18 00:59:56.481169] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.242 [2024-11-18 00:59:56.625464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.501 [2024-11-18 00:59:56.711968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.501 [2024-11-18 00:59:56.790762] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.067 00:59:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.067 00:59:57 -- common/autotest_common.sh@862 -- # return 0 00:17:23.067 00:59:57 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:23.326 [2024-11-18 00:59:57.696719] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.326 [2024-11-18 00:59:57.697080] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.326 [2024-11-18 00:59:57.697169] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.326 [2024-11-18 00:59:57.697222] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.326 [2024-11-18 00:59:57.697248] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:23.326 [2024-11-18 00:59:57.697321] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:23.326 00:59:57 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:23.326 00:59:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:23.326 00:59:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:23.326 00:59:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:23.326 00:59:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:23.326 00:59:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:23.326 00:59:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:23.326 00:59:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:23.326 00:59:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:23.326 00:59:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:23.326 00:59:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.326 00:59:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.584 00:59:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.584 "name": "Existed_Raid", 00:17:23.584 "uuid": "bfc2eddd-b37f-48f5-8495-ec520bd4c354", 00:17:23.584 "strip_size_kb": 0, 00:17:23.584 "state": "configuring", 00:17:23.585 "raid_level": "raid1", 00:17:23.585 "superblock": true, 00:17:23.585 "num_base_bdevs": 3, 00:17:23.585 "num_base_bdevs_discovered": 0, 00:17:23.585 "num_base_bdevs_operational": 3, 00:17:23.585 "base_bdevs_list": [ 00:17:23.585 { 00:17:23.585 "name": "BaseBdev1", 00:17:23.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.585 "is_configured": false, 00:17:23.585 "data_offset": 0, 00:17:23.585 "data_size": 0 00:17:23.585 }, 00:17:23.585 { 00:17:23.585 "name": "BaseBdev2", 00:17:23.585 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:23.585 "is_configured": false, 00:17:23.585 "data_offset": 0, 00:17:23.585 "data_size": 0 00:17:23.585 }, 00:17:23.585 { 00:17:23.585 "name": "BaseBdev3", 00:17:23.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.585 "is_configured": false, 00:17:23.585 "data_offset": 0, 00:17:23.585 "data_size": 0 00:17:23.585 } 00:17:23.585 ] 00:17:23.585 }' 00:17:23.585 00:59:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.585 00:59:57 -- common/autotest_common.sh@10 -- # set +x 00:17:24.519 00:59:58 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:24.519 [2024-11-18 00:59:58.824761] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.519 [2024-11-18 00:59:58.825069] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:24.519 00:59:58 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:24.777 [2024-11-18 00:59:59.008897] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.777 [2024-11-18 00:59:59.009245] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.777 [2024-11-18 00:59:59.009325] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.777 [2024-11-18 00:59:59.009383] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.777 [2024-11-18 00:59:59.009409] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:24.777 [2024-11-18 00:59:59.009457] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.777 00:59:59 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:25.034 [2024-11-18 00:59:59.209034] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.034 BaseBdev1 00:17:25.034 00:59:59 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:25.034 00:59:59 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:25.034 00:59:59 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:25.034 00:59:59 -- common/autotest_common.sh@899 -- # local i 00:17:25.034 00:59:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:25.034 00:59:59 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:25.034 00:59:59 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.291 00:59:59 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:25.549 [ 00:17:25.549 { 00:17:25.549 "name": "BaseBdev1", 00:17:25.549 "aliases": [ 00:17:25.549 "8d7fee0e-0427-48ee-a614-7f2029cb01a1" 00:17:25.549 ], 00:17:25.549 "product_name": "Malloc disk", 00:17:25.549 "block_size": 512, 00:17:25.549 "num_blocks": 65536, 00:17:25.549 "uuid": "8d7fee0e-0427-48ee-a614-7f2029cb01a1", 00:17:25.549 "assigned_rate_limits": { 00:17:25.549 "rw_ios_per_sec": 0, 00:17:25.549 "rw_mbytes_per_sec": 0, 00:17:25.549 "r_mbytes_per_sec": 0, 00:17:25.549 "w_mbytes_per_sec": 0 
00:17:25.549 }, 00:17:25.549 "claimed": true, 00:17:25.549 "claim_type": "exclusive_write", 00:17:25.549 "zoned": false, 00:17:25.549 "supported_io_types": { 00:17:25.549 "read": true, 00:17:25.549 "write": true, 00:17:25.549 "unmap": true, 00:17:25.549 "write_zeroes": true, 00:17:25.549 "flush": true, 00:17:25.549 "reset": true, 00:17:25.549 "compare": false, 00:17:25.549 "compare_and_write": false, 00:17:25.549 "abort": true, 00:17:25.549 "nvme_admin": false, 00:17:25.549 "nvme_io": false 00:17:25.549 }, 00:17:25.549 "memory_domains": [ 00:17:25.549 { 00:17:25.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.549 "dma_device_type": 2 00:17:25.549 } 00:17:25.549 ], 00:17:25.549 "driver_specific": {} 00:17:25.549 } 00:17:25.549 ] 00:17:25.549 00:59:59 -- common/autotest_common.sh@905 -- # return 0 00:17:25.549 00:59:59 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:25.549 00:59:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.549 00:59:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:25.549 00:59:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:25.549 00:59:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:25.549 00:59:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:25.549 00:59:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.549 00:59:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.549 00:59:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.549 00:59:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.549 00:59:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.549 00:59:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.807 00:59:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.807 "name": "Existed_Raid", 00:17:25.807 "uuid": "b37f8683-ac73-4ff7-bbfc-5c7cbbc6b465", 00:17:25.807 "strip_size_kb": 0, 00:17:25.807 "state": "configuring", 00:17:25.807 "raid_level": "raid1", 00:17:25.807 "superblock": true, 00:17:25.807 "num_base_bdevs": 3, 00:17:25.807 "num_base_bdevs_discovered": 1, 00:17:25.807 "num_base_bdevs_operational": 3, 00:17:25.807 "base_bdevs_list": [ 00:17:25.807 { 00:17:25.807 "name": "BaseBdev1", 00:17:25.807 "uuid": "8d7fee0e-0427-48ee-a614-7f2029cb01a1", 00:17:25.807 "is_configured": true, 00:17:25.807 "data_offset": 2048, 00:17:25.807 "data_size": 63488 00:17:25.807 }, 00:17:25.807 { 00:17:25.807 "name": "BaseBdev2", 00:17:25.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.807 "is_configured": false, 00:17:25.807 "data_offset": 0, 00:17:25.807 "data_size": 0 00:17:25.807 }, 00:17:25.807 { 00:17:25.807 "name": "BaseBdev3", 00:17:25.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.807 "is_configured": false, 00:17:25.807 "data_offset": 0, 00:17:25.807 "data_size": 0 00:17:25.807 } 00:17:25.807 ] 00:17:25.807 }' 00:17:25.807 00:59:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.807 00:59:59 -- common/autotest_common.sh@10 -- # set +x 00:17:26.372 01:00:00 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:26.630 [2024-11-18 01:00:00.841417] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.630 [2024-11-18 01:00:00.841745] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:17:26.630 01:00:00 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:26.630 01:00:00 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:26.887 01:00:01 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:27.144 BaseBdev1 00:17:27.144 01:00:01 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:27.144 01:00:01 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:27.144 01:00:01 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:27.144 01:00:01 -- common/autotest_common.sh@899 -- # local i 00:17:27.144 01:00:01 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:27.144 01:00:01 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:27.144 01:00:01 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.144 01:00:01 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:27.402 [ 00:17:27.402 { 00:17:27.402 "name": "BaseBdev1", 00:17:27.402 "aliases": [ 00:17:27.402 "0e866307-56cd-4f1d-afc3-b48ac9aca87d" 00:17:27.402 ], 00:17:27.402 "product_name": "Malloc disk", 00:17:27.402 "block_size": 512, 00:17:27.402 "num_blocks": 65536, 00:17:27.402 "uuid": "0e866307-56cd-4f1d-afc3-b48ac9aca87d", 00:17:27.402 "assigned_rate_limits": { 00:17:27.402 "rw_ios_per_sec": 0, 00:17:27.402 "rw_mbytes_per_sec": 0, 00:17:27.402 "r_mbytes_per_sec": 0, 00:17:27.402 "w_mbytes_per_sec": 0 00:17:27.402 }, 00:17:27.402 "claimed": false, 00:17:27.402 "zoned": false, 00:17:27.402 "supported_io_types": { 00:17:27.402 "read": true, 00:17:27.402 "write": true, 00:17:27.402 "unmap": true, 00:17:27.402 "write_zeroes": true, 00:17:27.402 "flush": true, 00:17:27.402 "reset": true, 00:17:27.402 "compare": false, 00:17:27.402 "compare_and_write": false, 00:17:27.402 "abort": true, 00:17:27.402 "nvme_admin": false, 00:17:27.402 "nvme_io": false 00:17:27.402 }, 00:17:27.402 "memory_domains": [ 00:17:27.402 { 00:17:27.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.402 "dma_device_type": 2 00:17:27.402 } 00:17:27.402 ], 00:17:27.402 "driver_specific": {} 00:17:27.402 } 00:17:27.402 ] 00:17:27.402 01:00:01 -- common/autotest_common.sh@905 -- # return 0 00:17:27.402 01:00:01 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:27.660 [2024-11-18 01:00:01.966377] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.660 [2024-11-18 01:00:01.969086] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:27.660 [2024-11-18 01:00:01.969287] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:27.660 [2024-11-18 01:00:01.969369] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:27.660 [2024-11-18 01:00:01.969429] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:27.660 01:00:01 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:27.660 01:00:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:27.660 01:00:01 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:27.660 01:00:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:27.660 01:00:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:27.660 01:00:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:27.660 01:00:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:27.660 01:00:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:27.660 01:00:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:27.660 01:00:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:27.660 01:00:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:27.660 01:00:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:27.660 01:00:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.660 01:00:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.918 01:00:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:27.918 "name": "Existed_Raid", 00:17:27.918 "uuid": "acf280aa-3b35-402c-944a-1783f19c721e", 00:17:27.918 "strip_size_kb": 0, 00:17:27.918 "state": "configuring", 00:17:27.918 "raid_level": "raid1", 00:17:27.918 "superblock": true, 00:17:27.918 "num_base_bdevs": 3, 00:17:27.918 "num_base_bdevs_discovered": 1, 00:17:27.918 "num_base_bdevs_operational": 3, 00:17:27.918 "base_bdevs_list": [ 00:17:27.918 { 00:17:27.918 "name": "BaseBdev1", 00:17:27.919 "uuid": "0e866307-56cd-4f1d-afc3-b48ac9aca87d", 00:17:27.919 "is_configured": true, 00:17:27.919 "data_offset": 2048, 00:17:27.919 "data_size": 63488 00:17:27.919 }, 00:17:27.919 { 00:17:27.919 "name": "BaseBdev2", 00:17:27.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.919 "is_configured": false, 00:17:27.919 "data_offset": 0, 00:17:27.919 "data_size": 0 00:17:27.919 }, 00:17:27.919 { 00:17:27.919 "name": "BaseBdev3", 00:17:27.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.919 "is_configured": false, 00:17:27.919 "data_offset": 0, 00:17:27.919 "data_size": 0 00:17:27.919 } 00:17:27.919 ] 00:17:27.919 }' 00:17:27.919 01:00:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:27.919 01:00:02 -- common/autotest_common.sh@10 -- # set +x 00:17:28.485 01:00:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:28.744 [2024-11-18 01:00:03.076445] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.744 BaseBdev2 00:17:28.744 01:00:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:28.744 01:00:03 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:28.744 01:00:03 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:28.744 01:00:03 -- common/autotest_common.sh@899 -- # local i 00:17:28.744 01:00:03 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:28.744 01:00:03 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:28.744 01:00:03 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:29.002 01:00:03 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:29.260 [ 00:17:29.260 { 00:17:29.260 "name": "BaseBdev2", 00:17:29.260 "aliases": [ 00:17:29.260 
"4ebf02aa-1162-4f1c-90a9-2386c8cb4e05" 00:17:29.260 ], 00:17:29.260 "product_name": "Malloc disk", 00:17:29.260 "block_size": 512, 00:17:29.260 "num_blocks": 65536, 00:17:29.260 "uuid": "4ebf02aa-1162-4f1c-90a9-2386c8cb4e05", 00:17:29.260 "assigned_rate_limits": { 00:17:29.260 "rw_ios_per_sec": 0, 00:17:29.260 "rw_mbytes_per_sec": 0, 00:17:29.260 "r_mbytes_per_sec": 0, 00:17:29.260 "w_mbytes_per_sec": 0 00:17:29.260 }, 00:17:29.260 "claimed": true, 00:17:29.260 "claim_type": "exclusive_write", 00:17:29.260 "zoned": false, 00:17:29.260 "supported_io_types": { 00:17:29.260 "read": true, 00:17:29.260 "write": true, 00:17:29.260 "unmap": true, 00:17:29.260 "write_zeroes": true, 00:17:29.260 "flush": true, 00:17:29.260 "reset": true, 00:17:29.260 "compare": false, 00:17:29.260 "compare_and_write": false, 00:17:29.260 "abort": true, 00:17:29.260 "nvme_admin": false, 00:17:29.260 "nvme_io": false 00:17:29.260 }, 00:17:29.260 "memory_domains": [ 00:17:29.260 { 00:17:29.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.260 "dma_device_type": 2 00:17:29.260 } 00:17:29.260 ], 00:17:29.260 "driver_specific": {} 00:17:29.260 } 00:17:29.260 ] 00:17:29.260 01:00:03 -- common/autotest_common.sh@905 -- # return 0 00:17:29.260 01:00:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:29.260 01:00:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:29.260 01:00:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:29.260 01:00:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:29.260 01:00:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:29.260 01:00:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:29.260 01:00:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:29.260 01:00:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:29.260 01:00:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:29.260 01:00:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:29.260 01:00:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:29.260 01:00:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:29.260 01:00:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.260 01:00:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.518 01:00:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:29.518 "name": "Existed_Raid", 00:17:29.518 "uuid": "acf280aa-3b35-402c-944a-1783f19c721e", 00:17:29.518 "strip_size_kb": 0, 00:17:29.518 "state": "configuring", 00:17:29.518 "raid_level": "raid1", 00:17:29.518 "superblock": true, 00:17:29.518 "num_base_bdevs": 3, 00:17:29.518 "num_base_bdevs_discovered": 2, 00:17:29.518 "num_base_bdevs_operational": 3, 00:17:29.518 "base_bdevs_list": [ 00:17:29.518 { 00:17:29.518 "name": "BaseBdev1", 00:17:29.518 "uuid": "0e866307-56cd-4f1d-afc3-b48ac9aca87d", 00:17:29.518 "is_configured": true, 00:17:29.518 "data_offset": 2048, 00:17:29.518 "data_size": 63488 00:17:29.518 }, 00:17:29.518 { 00:17:29.518 "name": "BaseBdev2", 00:17:29.518 "uuid": "4ebf02aa-1162-4f1c-90a9-2386c8cb4e05", 00:17:29.518 "is_configured": true, 00:17:29.518 "data_offset": 2048, 00:17:29.518 "data_size": 63488 00:17:29.518 }, 00:17:29.518 { 00:17:29.518 "name": "BaseBdev3", 00:17:29.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.518 "is_configured": false, 00:17:29.518 "data_offset": 0, 00:17:29.518 "data_size": 0 00:17:29.518 } 
00:17:29.518 ] 00:17:29.518 }' 00:17:29.518 01:00:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:29.518 01:00:03 -- common/autotest_common.sh@10 -- # set +x 00:17:30.113 01:00:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:30.377 [2024-11-18 01:00:04.659051] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:30.377 [2024-11-18 01:00:04.659687] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:17:30.377 [2024-11-18 01:00:04.659836] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:30.377 [2024-11-18 01:00:04.660097] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:17:30.377 [2024-11-18 01:00:04.660726] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:17:30.377 [2024-11-18 01:00:04.660867] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:17:30.377 [2024-11-18 01:00:04.661159] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.377 BaseBdev3 00:17:30.377 01:00:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:30.377 01:00:04 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:30.377 01:00:04 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:30.377 01:00:04 -- common/autotest_common.sh@899 -- # local i 00:17:30.377 01:00:04 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:30.377 01:00:04 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:30.377 01:00:04 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:30.634 01:00:04 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:30.891 [ 00:17:30.892 { 00:17:30.892 "name": "BaseBdev3", 00:17:30.892 "aliases": [ 00:17:30.892 "d0fc5e3e-a3a6-45a3-b0a3-db4b2a23599d" 00:17:30.892 ], 00:17:30.892 "product_name": "Malloc disk", 00:17:30.892 "block_size": 512, 00:17:30.892 "num_blocks": 65536, 00:17:30.892 "uuid": "d0fc5e3e-a3a6-45a3-b0a3-db4b2a23599d", 00:17:30.892 "assigned_rate_limits": { 00:17:30.892 "rw_ios_per_sec": 0, 00:17:30.892 "rw_mbytes_per_sec": 0, 00:17:30.892 "r_mbytes_per_sec": 0, 00:17:30.892 "w_mbytes_per_sec": 0 00:17:30.892 }, 00:17:30.892 "claimed": true, 00:17:30.892 "claim_type": "exclusive_write", 00:17:30.892 "zoned": false, 00:17:30.892 "supported_io_types": { 00:17:30.892 "read": true, 00:17:30.892 "write": true, 00:17:30.892 "unmap": true, 00:17:30.892 "write_zeroes": true, 00:17:30.892 "flush": true, 00:17:30.892 "reset": true, 00:17:30.892 "compare": false, 00:17:30.892 "compare_and_write": false, 00:17:30.892 "abort": true, 00:17:30.892 "nvme_admin": false, 00:17:30.892 "nvme_io": false 00:17:30.892 }, 00:17:30.892 "memory_domains": [ 00:17:30.892 { 00:17:30.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.892 "dma_device_type": 2 00:17:30.892 } 00:17:30.892 ], 00:17:30.892 "driver_specific": {} 00:17:30.892 } 00:17:30.892 ] 00:17:30.892 01:00:05 -- common/autotest_common.sh@905 -- # return 0 00:17:30.892 01:00:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:30.892 01:00:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:30.892 01:00:05 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:30.892 01:00:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:30.892 01:00:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:30.892 01:00:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:30.892 01:00:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:30.892 01:00:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:30.892 01:00:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:30.892 01:00:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:30.892 01:00:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:30.892 01:00:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:30.892 01:00:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.892 01:00:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.149 01:00:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:31.149 "name": "Existed_Raid", 00:17:31.149 "uuid": "acf280aa-3b35-402c-944a-1783f19c721e", 00:17:31.149 "strip_size_kb": 0, 00:17:31.149 "state": "online", 00:17:31.149 "raid_level": "raid1", 00:17:31.149 "superblock": true, 00:17:31.149 "num_base_bdevs": 3, 00:17:31.149 "num_base_bdevs_discovered": 3, 00:17:31.149 "num_base_bdevs_operational": 3, 00:17:31.149 "base_bdevs_list": [ 00:17:31.149 { 00:17:31.149 "name": "BaseBdev1", 00:17:31.150 "uuid": "0e866307-56cd-4f1d-afc3-b48ac9aca87d", 00:17:31.150 "is_configured": true, 00:17:31.150 "data_offset": 2048, 00:17:31.150 "data_size": 63488 00:17:31.150 }, 00:17:31.150 { 00:17:31.150 "name": "BaseBdev2", 00:17:31.150 "uuid": "4ebf02aa-1162-4f1c-90a9-2386c8cb4e05", 00:17:31.150 "is_configured": true, 00:17:31.150 "data_offset": 2048, 00:17:31.150 "data_size": 63488 00:17:31.150 }, 00:17:31.150 { 00:17:31.150 "name": "BaseBdev3", 00:17:31.150 "uuid": "d0fc5e3e-a3a6-45a3-b0a3-db4b2a23599d", 00:17:31.150 "is_configured": true, 00:17:31.150 "data_offset": 2048, 00:17:31.150 "data_size": 63488 00:17:31.150 } 00:17:31.150 ] 00:17:31.150 }' 00:17:31.150 01:00:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:31.150 01:00:05 -- common/autotest_common.sh@10 -- # set +x 00:17:31.715 01:00:06 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:31.975 [2024-11-18 01:00:06.255574] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
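Every verify_raid_bdev_state call traced here reduces to the same query: dump all raid bdevs over the RPC socket and filter for the array under test. A minimal standalone sketch of that query, assuming the same bdev_svc instance is still listening on /var/tmp/spdk-raid.sock and using only the RPC and jq invocations shown in this trace, is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Dump all raid bdevs, keep the entry named Existed_Raid, and print the
  # fields the helper compares against its expected state and counts.
  $rpc -s $sock bdev_raid_get_bdevs all \
    | jq '.[] | select(.name == "Existed_Raid")
          | {state, raid_level, num_base_bdevs_discovered, num_base_bdevs_operational}'
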
00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.975 01:00:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.233 01:00:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.233 "name": "Existed_Raid", 00:17:32.233 "uuid": "acf280aa-3b35-402c-944a-1783f19c721e", 00:17:32.233 "strip_size_kb": 0, 00:17:32.233 "state": "online", 00:17:32.233 "raid_level": "raid1", 00:17:32.233 "superblock": true, 00:17:32.233 "num_base_bdevs": 3, 00:17:32.233 "num_base_bdevs_discovered": 2, 00:17:32.233 "num_base_bdevs_operational": 2, 00:17:32.233 "base_bdevs_list": [ 00:17:32.233 { 00:17:32.233 "name": null, 00:17:32.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.233 "is_configured": false, 00:17:32.233 "data_offset": 2048, 00:17:32.233 "data_size": 63488 00:17:32.233 }, 00:17:32.233 { 00:17:32.233 "name": "BaseBdev2", 00:17:32.233 "uuid": "4ebf02aa-1162-4f1c-90a9-2386c8cb4e05", 00:17:32.233 "is_configured": true, 00:17:32.233 "data_offset": 2048, 00:17:32.233 "data_size": 63488 00:17:32.233 }, 00:17:32.233 { 00:17:32.233 "name": "BaseBdev3", 00:17:32.233 "uuid": "d0fc5e3e-a3a6-45a3-b0a3-db4b2a23599d", 00:17:32.233 "is_configured": true, 00:17:32.233 "data_offset": 2048, 00:17:32.233 "data_size": 63488 00:17:32.233 } 00:17:32.233 ] 00:17:32.233 }' 00:17:32.233 01:00:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.233 01:00:06 -- common/autotest_common.sh@10 -- # set +x 00:17:32.799 01:00:07 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:32.799 01:00:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:32.799 01:00:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.799 01:00:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:33.058 01:00:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:33.058 01:00:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.058 01:00:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:33.315 [2024-11-18 01:00:07.574800] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.315 01:00:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:33.315 01:00:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:33.315 01:00:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:33.315 01:00:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.573 01:00:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:33.573 01:00:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.573 01:00:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:33.831 [2024-11-18 01:00:08.056416] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:33.831 [2024-11-18 01:00:08.056730] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.831 [2024-11-18 01:00:08.056942] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.831 [2024-11-18 01:00:08.078266] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.831 [2024-11-18 01:00:08.078547] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:17:33.831 01:00:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:33.831 01:00:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:33.831 01:00:08 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.831 01:00:08 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:34.090 01:00:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:34.090 01:00:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:34.090 01:00:08 -- bdev/bdev_raid.sh@287 -- # killprocess 128211 00:17:34.090 01:00:08 -- common/autotest_common.sh@936 -- # '[' -z 128211 ']' 00:17:34.090 01:00:08 -- common/autotest_common.sh@940 -- # kill -0 128211 00:17:34.090 01:00:08 -- common/autotest_common.sh@941 -- # uname 00:17:34.090 01:00:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:34.090 01:00:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128211 00:17:34.090 01:00:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:34.090 01:00:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:34.090 01:00:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128211' 00:17:34.090 killing process with pid 128211 00:17:34.090 01:00:08 -- common/autotest_common.sh@955 -- # kill 128211 00:17:34.090 01:00:08 -- common/autotest_common.sh@960 -- # wait 128211 00:17:34.090 [2024-11-18 01:00:08.333941] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.090 [2024-11-18 01:00:08.334040] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:34.348 ************************************ 00:17:34.348 END TEST raid_state_function_test_sb 00:17:34.348 ************************************ 00:17:34.348 01:00:08 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:34.348 00:17:34.348 real 0m12.322s 00:17:34.348 user 0m21.590s 00:17:34.348 sys 0m2.243s 00:17:34.348 01:00:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:34.348 01:00:08 -- common/autotest_common.sh@10 -- # set +x 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:34.606 01:00:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:34.606 01:00:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:34.606 01:00:08 -- common/autotest_common.sh@10 -- # set +x 00:17:34.606 ************************************ 00:17:34.606 START TEST raid_superblock_test 00:17:34.606 ************************************ 00:17:34.606 01:00:08 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 3 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@343 
-- # local raid_bdev_name=raid_bdev1 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@357 -- # raid_pid=128596 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:34.606 01:00:08 -- bdev/bdev_raid.sh@358 -- # waitforlisten 128596 /var/tmp/spdk-raid.sock 00:17:34.606 01:00:08 -- common/autotest_common.sh@829 -- # '[' -z 128596 ']' 00:17:34.606 01:00:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:34.606 01:00:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.606 01:00:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:34.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:34.606 01:00:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.606 01:00:08 -- common/autotest_common.sh@10 -- # set +x 00:17:34.606 [2024-11-18 01:00:08.877350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:34.606 [2024-11-18 01:00:08.879100] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128596 ] 00:17:34.865 [2024-11-18 01:00:09.039558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.865 [2024-11-18 01:00:09.132361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.865 [2024-11-18 01:00:09.217693] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.433 01:00:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.433 01:00:09 -- common/autotest_common.sh@862 -- # return 0 00:17:35.433 01:00:09 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:35.433 01:00:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:35.433 01:00:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:35.433 01:00:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:35.433 01:00:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:35.434 01:00:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:35.434 01:00:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:35.434 01:00:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:35.434 01:00:09 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:35.691 malloc1 00:17:35.691 01:00:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.949 [2024-11-18 01:00:10.198304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.949 [2024-11-18 01:00:10.198771] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.949 [2024-11-18 01:00:10.198874] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:17:35.949 [2024-11-18 01:00:10.199038] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.949 [2024-11-18 01:00:10.202395] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.949 [2024-11-18 01:00:10.202589] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.949 pt1 00:17:35.949 01:00:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:35.949 01:00:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:35.949 01:00:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:35.949 01:00:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:35.949 01:00:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:35.949 01:00:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:35.949 01:00:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:35.949 01:00:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:35.949 01:00:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:36.207 malloc2 00:17:36.207 01:00:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.465 [2024-11-18 01:00:10.702833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.465 [2024-11-18 01:00:10.703253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.465 [2024-11-18 01:00:10.703340] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:36.465 [2024-11-18 01:00:10.703472] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.465 [2024-11-18 01:00:10.706424] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.465 [2024-11-18 01:00:10.706615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.465 pt2 00:17:36.465 01:00:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:36.465 01:00:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:36.465 01:00:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:36.465 01:00:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:36.465 01:00:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:36.465 01:00:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:36.465 01:00:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:36.465 01:00:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:36.465 01:00:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:36.724 malloc3 00:17:36.724 01:00:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:36.982 [2024-11-18 01:00:11.178451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:36.982 [2024-11-18 01:00:11.178834] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.982 [2024-11-18 01:00:11.179001] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:36.982 [2024-11-18 01:00:11.179157] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.982 [2024-11-18 01:00:11.182103] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.982 [2024-11-18 01:00:11.182301] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:36.982 pt3 00:17:36.982 01:00:11 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:36.982 01:00:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:36.982 01:00:11 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:36.982 [2024-11-18 01:00:11.374872] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:36.982 [2024-11-18 01:00:11.377768] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.982 [2024-11-18 01:00:11.378013] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:36.982 [2024-11-18 01:00:11.378302] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:36.982 [2024-11-18 01:00:11.378426] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:36.982 [2024-11-18 01:00:11.378663] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:17:36.982 [2024-11-18 01:00:11.379286] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:36.982 [2024-11-18 01:00:11.379416] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:17:36.982 [2024-11-18 01:00:11.379747] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.241 01:00:11 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:37.241 01:00:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:37.241 01:00:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:37.241 01:00:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:37.241 01:00:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:37.241 01:00:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:37.241 01:00:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.241 01:00:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.241 01:00:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.241 01:00:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.241 01:00:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.241 01:00:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.499 01:00:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.499 "name": "raid_bdev1", 00:17:37.499 "uuid": "18d17888-03d1-476e-ba7f-014245228e86", 00:17:37.499 "strip_size_kb": 0, 00:17:37.499 "state": "online", 00:17:37.499 "raid_level": "raid1", 00:17:37.499 "superblock": true, 00:17:37.499 "num_base_bdevs": 3, 00:17:37.499 "num_base_bdevs_discovered": 3, 00:17:37.499 "num_base_bdevs_operational": 3, 00:17:37.499 "base_bdevs_list": [ 00:17:37.499 { 00:17:37.499 "name": 
"pt1", 00:17:37.499 "uuid": "7dfeb36d-b36f-5e40-9fec-edb5406271a0", 00:17:37.499 "is_configured": true, 00:17:37.499 "data_offset": 2048, 00:17:37.499 "data_size": 63488 00:17:37.499 }, 00:17:37.500 { 00:17:37.500 "name": "pt2", 00:17:37.500 "uuid": "44e3c87a-8f16-5abb-891a-2a0eee249178", 00:17:37.500 "is_configured": true, 00:17:37.500 "data_offset": 2048, 00:17:37.500 "data_size": 63488 00:17:37.500 }, 00:17:37.500 { 00:17:37.500 "name": "pt3", 00:17:37.500 "uuid": "915b2ad7-0323-54c6-9c15-6e9e6cdf4d39", 00:17:37.500 "is_configured": true, 00:17:37.500 "data_offset": 2048, 00:17:37.500 "data_size": 63488 00:17:37.500 } 00:17:37.500 ] 00:17:37.500 }' 00:17:37.500 01:00:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.500 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:17:38.067 01:00:12 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:38.068 01:00:12 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:38.068 [2024-11-18 01:00:12.400139] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.068 01:00:12 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=18d17888-03d1-476e-ba7f-014245228e86 00:17:38.068 01:00:12 -- bdev/bdev_raid.sh@380 -- # '[' -z 18d17888-03d1-476e-ba7f-014245228e86 ']' 00:17:38.068 01:00:12 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:38.326 [2024-11-18 01:00:12.587919] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.326 [2024-11-18 01:00:12.588214] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.326 [2024-11-18 01:00:12.588464] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.326 [2024-11-18 01:00:12.588610] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.326 [2024-11-18 01:00:12.588794] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:17:38.326 01:00:12 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.326 01:00:12 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:38.585 01:00:12 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:38.585 01:00:12 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:38.585 01:00:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:38.585 01:00:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:38.844 01:00:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:38.844 01:00:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:38.844 01:00:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:38.844 01:00:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:39.102 01:00:13 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:39.102 01:00:13 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:39.360 01:00:13 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:39.360 01:00:13 -- 
bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:39.360 01:00:13 -- common/autotest_common.sh@650 -- # local es=0 00:17:39.360 01:00:13 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:39.360 01:00:13 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.360 01:00:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.360 01:00:13 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.360 01:00:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.360 01:00:13 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.360 01:00:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.360 01:00:13 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.360 01:00:13 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:39.360 01:00:13 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:39.618 [2024-11-18 01:00:13.872129] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:39.618 [2024-11-18 01:00:13.874830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:39.618 [2024-11-18 01:00:13.875037] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:39.618 [2024-11-18 01:00:13.875126] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:39.618 [2024-11-18 01:00:13.875314] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:39.618 [2024-11-18 01:00:13.875380] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:39.618 [2024-11-18 01:00:13.875507] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.618 [2024-11-18 01:00:13.875616] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:17:39.618 request: 00:17:39.618 { 00:17:39.618 "name": "raid_bdev1", 00:17:39.618 "raid_level": "raid1", 00:17:39.618 "base_bdevs": [ 00:17:39.618 "malloc1", 00:17:39.618 "malloc2", 00:17:39.618 "malloc3" 00:17:39.618 ], 00:17:39.618 "superblock": false, 00:17:39.618 "method": "bdev_raid_create", 00:17:39.618 "req_id": 1 00:17:39.618 } 00:17:39.618 Got JSON-RPC error response 00:17:39.618 response: 00:17:39.618 { 00:17:39.618 "code": -17, 00:17:39.618 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:39.618 } 00:17:39.618 01:00:13 -- common/autotest_common.sh@653 -- # es=1 00:17:39.618 01:00:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:39.618 01:00:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:39.618 01:00:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:39.618 01:00:13 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:39.618 01:00:13 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:39.876 01:00:14 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:39.876 01:00:14 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:39.877 01:00:14 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:40.135 [2024-11-18 01:00:14.336210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:40.135 [2024-11-18 01:00:14.336573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.135 [2024-11-18 01:00:14.336650] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:40.135 [2024-11-18 01:00:14.336786] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.135 [2024-11-18 01:00:14.339588] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.135 [2024-11-18 01:00:14.339769] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:40.135 [2024-11-18 01:00:14.339974] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:40.135 [2024-11-18 01:00:14.340110] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:40.135 pt1 00:17:40.135 01:00:14 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:40.135 01:00:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:40.135 01:00:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:40.135 01:00:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:40.135 01:00:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:40.135 01:00:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:40.135 01:00:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.135 01:00:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.135 01:00:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.135 01:00:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.135 01:00:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.135 01:00:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.394 01:00:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:40.394 "name": "raid_bdev1", 00:17:40.394 "uuid": "18d17888-03d1-476e-ba7f-014245228e86", 00:17:40.394 "strip_size_kb": 0, 00:17:40.394 "state": "configuring", 00:17:40.394 "raid_level": "raid1", 00:17:40.394 "superblock": true, 00:17:40.394 "num_base_bdevs": 3, 00:17:40.394 "num_base_bdevs_discovered": 1, 00:17:40.394 "num_base_bdevs_operational": 3, 00:17:40.394 "base_bdevs_list": [ 00:17:40.394 { 00:17:40.394 "name": "pt1", 00:17:40.394 "uuid": "7dfeb36d-b36f-5e40-9fec-edb5406271a0", 00:17:40.394 "is_configured": true, 00:17:40.394 "data_offset": 2048, 00:17:40.394 "data_size": 63488 00:17:40.394 }, 00:17:40.394 { 00:17:40.395 "name": null, 00:17:40.395 "uuid": "44e3c87a-8f16-5abb-891a-2a0eee249178", 00:17:40.395 "is_configured": false, 00:17:40.395 "data_offset": 2048, 00:17:40.395 "data_size": 63488 00:17:40.395 }, 00:17:40.395 { 00:17:40.395 "name": null, 00:17:40.395 "uuid": "915b2ad7-0323-54c6-9c15-6e9e6cdf4d39", 00:17:40.395 "is_configured": false, 00:17:40.395 "data_offset": 2048, 00:17:40.395 
"data_size": 63488 00:17:40.395 } 00:17:40.395 ] 00:17:40.395 }' 00:17:40.395 01:00:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.395 01:00:14 -- common/autotest_common.sh@10 -- # set +x 00:17:40.963 01:00:15 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:40.963 01:00:15 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:41.222 [2024-11-18 01:00:15.472595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:41.222 [2024-11-18 01:00:15.473009] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.222 [2024-11-18 01:00:15.473097] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:41.222 [2024-11-18 01:00:15.473211] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.222 [2024-11-18 01:00:15.473708] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.222 [2024-11-18 01:00:15.473777] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:41.222 [2024-11-18 01:00:15.473971] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:41.222 [2024-11-18 01:00:15.474023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:41.222 pt2 00:17:41.222 01:00:15 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:41.481 [2024-11-18 01:00:15.744710] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:41.481 01:00:15 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:41.481 01:00:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:41.481 01:00:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:41.481 01:00:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:41.481 01:00:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:41.481 01:00:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:41.481 01:00:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.481 01:00:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:41.481 01:00:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.481 01:00:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.481 01:00:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.481 01:00:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.740 01:00:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.740 "name": "raid_bdev1", 00:17:41.740 "uuid": "18d17888-03d1-476e-ba7f-014245228e86", 00:17:41.740 "strip_size_kb": 0, 00:17:41.740 "state": "configuring", 00:17:41.740 "raid_level": "raid1", 00:17:41.740 "superblock": true, 00:17:41.740 "num_base_bdevs": 3, 00:17:41.740 "num_base_bdevs_discovered": 1, 00:17:41.740 "num_base_bdevs_operational": 3, 00:17:41.740 "base_bdevs_list": [ 00:17:41.740 { 00:17:41.740 "name": "pt1", 00:17:41.740 "uuid": "7dfeb36d-b36f-5e40-9fec-edb5406271a0", 00:17:41.740 "is_configured": true, 00:17:41.740 "data_offset": 2048, 00:17:41.740 "data_size": 63488 00:17:41.740 }, 00:17:41.740 { 00:17:41.740 "name": null, 00:17:41.740 "uuid": "44e3c87a-8f16-5abb-891a-2a0eee249178", 
00:17:41.740 "is_configured": false, 00:17:41.740 "data_offset": 2048, 00:17:41.740 "data_size": 63488 00:17:41.740 }, 00:17:41.740 { 00:17:41.740 "name": null, 00:17:41.740 "uuid": "915b2ad7-0323-54c6-9c15-6e9e6cdf4d39", 00:17:41.740 "is_configured": false, 00:17:41.740 "data_offset": 2048, 00:17:41.740 "data_size": 63488 00:17:41.740 } 00:17:41.740 ] 00:17:41.740 }' 00:17:41.740 01:00:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.740 01:00:16 -- common/autotest_common.sh@10 -- # set +x 00:17:42.307 01:00:16 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:42.307 01:00:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:42.307 01:00:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:42.566 [2024-11-18 01:00:16.868857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:42.566 [2024-11-18 01:00:16.869189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.566 [2024-11-18 01:00:16.869267] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:42.566 [2024-11-18 01:00:16.869374] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.566 [2024-11-18 01:00:16.869944] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.566 [2024-11-18 01:00:16.870085] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:42.566 [2024-11-18 01:00:16.870244] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:42.566 [2024-11-18 01:00:16.870466] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:42.566 pt2 00:17:42.566 01:00:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:42.566 01:00:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:42.566 01:00:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:42.824 [2024-11-18 01:00:17.128963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:42.824 [2024-11-18 01:00:17.129308] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.824 [2024-11-18 01:00:17.129401] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:42.824 [2024-11-18 01:00:17.129511] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.824 [2024-11-18 01:00:17.130119] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.824 [2024-11-18 01:00:17.130332] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:42.824 [2024-11-18 01:00:17.130552] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:42.824 [2024-11-18 01:00:17.130663] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:42.824 [2024-11-18 01:00:17.130934] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:17:42.824 [2024-11-18 01:00:17.131046] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:42.824 [2024-11-18 01:00:17.131253] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:17:42.824 
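Each base device in this part of the test is a passthru bdev stacked on a malloc bdev, created with the same two RPCs the trace repeats for pt2 and pt3. A minimal sketch of one such pair, reusing the pt3 name, UUID, and the 32 MiB / 512-byte malloc geometry reported above (65536 blocks), and assuming the same RPC socket, would be:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # 32 MiB malloc backing device with 512-byte blocks...
  $rpc -s $sock bdev_malloc_create 32 512 -b malloc3
  # ...then a passthru bdev claiming it; the raid bdev later claims the passthru
  # layer, which is why deleting a pt* device is how a base bdev leaves the array.
  $rpc -s $sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
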
[2024-11-18 01:00:17.131738] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:17:42.824 [2024-11-18 01:00:17.131855] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:17:42.824 [2024-11-18 01:00:17.132062] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.824 pt3 00:17:42.824 01:00:17 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:42.824 01:00:17 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:42.824 01:00:17 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:42.824 01:00:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:42.824 01:00:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:42.824 01:00:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:42.824 01:00:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:42.824 01:00:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:42.824 01:00:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.824 01:00:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.824 01:00:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.824 01:00:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.824 01:00:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.824 01:00:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.083 01:00:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:43.083 "name": "raid_bdev1", 00:17:43.083 "uuid": "18d17888-03d1-476e-ba7f-014245228e86", 00:17:43.083 "strip_size_kb": 0, 00:17:43.083 "state": "online", 00:17:43.083 "raid_level": "raid1", 00:17:43.083 "superblock": true, 00:17:43.083 "num_base_bdevs": 3, 00:17:43.083 "num_base_bdevs_discovered": 3, 00:17:43.083 "num_base_bdevs_operational": 3, 00:17:43.083 "base_bdevs_list": [ 00:17:43.083 { 00:17:43.083 "name": "pt1", 00:17:43.083 "uuid": "7dfeb36d-b36f-5e40-9fec-edb5406271a0", 00:17:43.083 "is_configured": true, 00:17:43.083 "data_offset": 2048, 00:17:43.083 "data_size": 63488 00:17:43.083 }, 00:17:43.083 { 00:17:43.083 "name": "pt2", 00:17:43.083 "uuid": "44e3c87a-8f16-5abb-891a-2a0eee249178", 00:17:43.083 "is_configured": true, 00:17:43.083 "data_offset": 2048, 00:17:43.083 "data_size": 63488 00:17:43.083 }, 00:17:43.083 { 00:17:43.083 "name": "pt3", 00:17:43.083 "uuid": "915b2ad7-0323-54c6-9c15-6e9e6cdf4d39", 00:17:43.083 "is_configured": true, 00:17:43.083 "data_offset": 2048, 00:17:43.083 "data_size": 63488 00:17:43.083 } 00:17:43.083 ] 00:17:43.083 }' 00:17:43.083 01:00:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.083 01:00:17 -- common/autotest_common.sh@10 -- # set +x 00:17:43.652 01:00:17 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:43.652 01:00:17 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:43.912 [2024-11-18 01:00:18.229408] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.912 01:00:18 -- bdev/bdev_raid.sh@430 -- # '[' 18d17888-03d1-476e-ba7f-014245228e86 '!=' 18d17888-03d1-476e-ba7f-014245228e86 ']' 00:17:43.912 01:00:18 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:17:43.912 01:00:18 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:43.912 01:00:18 -- bdev/bdev_raid.sh@196 -- # return 0 
00:17:43.912 01:00:18 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:44.172 [2024-11-18 01:00:18.509297] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:44.172 01:00:18 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:44.172 01:00:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:44.172 01:00:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:44.172 01:00:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:44.172 01:00:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:44.172 01:00:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:44.172 01:00:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.172 01:00:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.172 01:00:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.172 01:00:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.172 01:00:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.172 01:00:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.432 01:00:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.432 "name": "raid_bdev1", 00:17:44.432 "uuid": "18d17888-03d1-476e-ba7f-014245228e86", 00:17:44.432 "strip_size_kb": 0, 00:17:44.432 "state": "online", 00:17:44.432 "raid_level": "raid1", 00:17:44.432 "superblock": true, 00:17:44.432 "num_base_bdevs": 3, 00:17:44.432 "num_base_bdevs_discovered": 2, 00:17:44.432 "num_base_bdevs_operational": 2, 00:17:44.432 "base_bdevs_list": [ 00:17:44.432 { 00:17:44.432 "name": null, 00:17:44.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.432 "is_configured": false, 00:17:44.432 "data_offset": 2048, 00:17:44.432 "data_size": 63488 00:17:44.432 }, 00:17:44.432 { 00:17:44.432 "name": "pt2", 00:17:44.432 "uuid": "44e3c87a-8f16-5abb-891a-2a0eee249178", 00:17:44.432 "is_configured": true, 00:17:44.432 "data_offset": 2048, 00:17:44.432 "data_size": 63488 00:17:44.432 }, 00:17:44.432 { 00:17:44.432 "name": "pt3", 00:17:44.432 "uuid": "915b2ad7-0323-54c6-9c15-6e9e6cdf4d39", 00:17:44.432 "is_configured": true, 00:17:44.432 "data_offset": 2048, 00:17:44.432 "data_size": 63488 00:17:44.432 } 00:17:44.432 ] 00:17:44.432 }' 00:17:44.432 01:00:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.432 01:00:18 -- common/autotest_common.sh@10 -- # set +x 00:17:45.003 01:00:19 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:45.261 [2024-11-18 01:00:19.605424] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.261 [2024-11-18 01:00:19.605768] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.261 [2024-11-18 01:00:19.605997] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.261 [2024-11-18 01:00:19.606184] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.261 [2024-11-18 01:00:19.606279] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:17:45.261 01:00:19 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
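Because raid1 carries redundancy (has_redundancy returns 0 for it above), the helper expects the array to stay online after a single base bdev is pulled, and only the explicit delete drives it offline. The removal-and-teardown sequence in this stretch of the trace corresponds to roughly these RPCs, a sketch under the same socket and naming assumptions as the earlier snippets:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Drop one leg of the raid1 array; the state reported for raid_bdev1 should stay "online".
  $rpc -s $sock bdev_passthru_delete pt1
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
  # Delete the array itself; the DEBUG lines above show the online -> offline transition and cleanup.
  $rpc -s $sock bdev_raid_delete raid_bdev1
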
00:17:45.261 01:00:19 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:17:45.520 01:00:19 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:17:45.520 01:00:19 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:17:45.520 01:00:19 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:17:45.520 01:00:19 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:45.520 01:00:19 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:45.779 01:00:20 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:45.779 01:00:20 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:45.779 01:00:20 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:46.038 01:00:20 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:46.038 01:00:20 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:46.038 01:00:20 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:17:46.038 01:00:20 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:46.038 01:00:20 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:46.297 [2024-11-18 01:00:20.441610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:46.297 [2024-11-18 01:00:20.441963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.297 [2024-11-18 01:00:20.442048] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:46.297 [2024-11-18 01:00:20.442200] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.297 [2024-11-18 01:00:20.445369] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.297 [2024-11-18 01:00:20.445578] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:46.297 [2024-11-18 01:00:20.445797] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:46.297 [2024-11-18 01:00:20.445967] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:46.297 pt2 00:17:46.297 01:00:20 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:46.297 01:00:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:46.297 01:00:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:46.297 01:00:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:46.297 01:00:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:46.297 01:00:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:46.297 01:00:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.297 01:00:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.297 01:00:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.297 01:00:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.297 01:00:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.297 01:00:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.297 01:00:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.297 "name": "raid_bdev1", 00:17:46.297 "uuid": "18d17888-03d1-476e-ba7f-014245228e86", 00:17:46.297 "strip_size_kb": 0, 00:17:46.297 "state": "configuring", 00:17:46.297 "raid_level": 
"raid1", 00:17:46.297 "superblock": true, 00:17:46.297 "num_base_bdevs": 3, 00:17:46.297 "num_base_bdevs_discovered": 1, 00:17:46.297 "num_base_bdevs_operational": 2, 00:17:46.297 "base_bdevs_list": [ 00:17:46.297 { 00:17:46.297 "name": null, 00:17:46.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.297 "is_configured": false, 00:17:46.297 "data_offset": 2048, 00:17:46.297 "data_size": 63488 00:17:46.297 }, 00:17:46.297 { 00:17:46.297 "name": "pt2", 00:17:46.297 "uuid": "44e3c87a-8f16-5abb-891a-2a0eee249178", 00:17:46.297 "is_configured": true, 00:17:46.297 "data_offset": 2048, 00:17:46.297 "data_size": 63488 00:17:46.297 }, 00:17:46.298 { 00:17:46.298 "name": null, 00:17:46.298 "uuid": "915b2ad7-0323-54c6-9c15-6e9e6cdf4d39", 00:17:46.298 "is_configured": false, 00:17:46.298 "data_offset": 2048, 00:17:46.298 "data_size": 63488 00:17:46.298 } 00:17:46.298 ] 00:17:46.298 }' 00:17:46.298 01:00:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.298 01:00:20 -- common/autotest_common.sh@10 -- # set +x 00:17:46.866 01:00:21 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:46.866 01:00:21 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:46.866 01:00:21 -- bdev/bdev_raid.sh@462 -- # i=2 00:17:46.866 01:00:21 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:47.125 [2024-11-18 01:00:21.486736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:47.125 [2024-11-18 01:00:21.487034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.125 [2024-11-18 01:00:21.487119] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:47.125 [2024-11-18 01:00:21.487235] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.125 [2024-11-18 01:00:21.487787] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.125 [2024-11-18 01:00:21.487922] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:47.125 [2024-11-18 01:00:21.488087] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:47.125 [2024-11-18 01:00:21.488139] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:47.125 [2024-11-18 01:00:21.488288] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:17:47.125 [2024-11-18 01:00:21.488320] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:47.125 [2024-11-18 01:00:21.488412] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:17:47.125 [2024-11-18 01:00:21.488757] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:17:47.125 [2024-11-18 01:00:21.488795] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:17:47.125 [2024-11-18 01:00:21.488918] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.125 pt3 00:17:47.125 01:00:21 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:47.125 01:00:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:47.125 01:00:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:47.125 01:00:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:47.125 
01:00:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:47.125 01:00:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:47.125 01:00:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.125 01:00:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.125 01:00:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.125 01:00:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.125 01:00:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.125 01:00:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.383 01:00:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.383 "name": "raid_bdev1", 00:17:47.383 "uuid": "18d17888-03d1-476e-ba7f-014245228e86", 00:17:47.383 "strip_size_kb": 0, 00:17:47.383 "state": "online", 00:17:47.383 "raid_level": "raid1", 00:17:47.383 "superblock": true, 00:17:47.383 "num_base_bdevs": 3, 00:17:47.383 "num_base_bdevs_discovered": 2, 00:17:47.383 "num_base_bdevs_operational": 2, 00:17:47.383 "base_bdevs_list": [ 00:17:47.383 { 00:17:47.383 "name": null, 00:17:47.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.383 "is_configured": false, 00:17:47.383 "data_offset": 2048, 00:17:47.383 "data_size": 63488 00:17:47.383 }, 00:17:47.383 { 00:17:47.383 "name": "pt2", 00:17:47.383 "uuid": "44e3c87a-8f16-5abb-891a-2a0eee249178", 00:17:47.383 "is_configured": true, 00:17:47.383 "data_offset": 2048, 00:17:47.383 "data_size": 63488 00:17:47.383 }, 00:17:47.383 { 00:17:47.383 "name": "pt3", 00:17:47.383 "uuid": "915b2ad7-0323-54c6-9c15-6e9e6cdf4d39", 00:17:47.383 "is_configured": true, 00:17:47.383 "data_offset": 2048, 00:17:47.383 "data_size": 63488 00:17:47.383 } 00:17:47.383 ] 00:17:47.383 }' 00:17:47.383 01:00:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.383 01:00:21 -- common/autotest_common.sh@10 -- # set +x 00:17:47.951 01:00:22 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:17:47.951 01:00:22 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:48.210 [2024-11-18 01:00:22.610873] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.210 [2024-11-18 01:00:22.611092] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.210 [2024-11-18 01:00:22.611289] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.210 [2024-11-18 01:00:22.611397] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.468 [2024-11-18 01:00:22.611627] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:17:48.468 01:00:22 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.468 01:00:22 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:17:48.727 01:00:22 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:17:48.727 01:00:22 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:17:48.727 01:00:22 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:48.727 [2024-11-18 01:00:23.058962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:48.727 [2024-11-18 
01:00:23.059337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.727 [2024-11-18 01:00:23.059419] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:48.727 [2024-11-18 01:00:23.059548] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.727 [2024-11-18 01:00:23.062341] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.727 [2024-11-18 01:00:23.062512] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:48.727 [2024-11-18 01:00:23.062714] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:48.727 [2024-11-18 01:00:23.062850] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:48.727 pt1 00:17:48.727 01:00:23 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:48.727 01:00:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:48.727 01:00:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:48.727 01:00:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:48.727 01:00:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:48.727 01:00:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:48.727 01:00:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.727 01:00:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.727 01:00:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.727 01:00:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.727 01:00:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.727 01:00:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.985 01:00:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.985 "name": "raid_bdev1", 00:17:48.985 "uuid": "18d17888-03d1-476e-ba7f-014245228e86", 00:17:48.985 "strip_size_kb": 0, 00:17:48.985 "state": "configuring", 00:17:48.985 "raid_level": "raid1", 00:17:48.985 "superblock": true, 00:17:48.985 "num_base_bdevs": 3, 00:17:48.985 "num_base_bdevs_discovered": 1, 00:17:48.985 "num_base_bdevs_operational": 3, 00:17:48.985 "base_bdevs_list": [ 00:17:48.985 { 00:17:48.985 "name": "pt1", 00:17:48.985 "uuid": "7dfeb36d-b36f-5e40-9fec-edb5406271a0", 00:17:48.985 "is_configured": true, 00:17:48.985 "data_offset": 2048, 00:17:48.985 "data_size": 63488 00:17:48.985 }, 00:17:48.985 { 00:17:48.985 "name": null, 00:17:48.985 "uuid": "44e3c87a-8f16-5abb-891a-2a0eee249178", 00:17:48.985 "is_configured": false, 00:17:48.985 "data_offset": 2048, 00:17:48.985 "data_size": 63488 00:17:48.985 }, 00:17:48.985 { 00:17:48.985 "name": null, 00:17:48.985 "uuid": "915b2ad7-0323-54c6-9c15-6e9e6cdf4d39", 00:17:48.985 "is_configured": false, 00:17:48.985 "data_offset": 2048, 00:17:48.985 "data_size": 63488 00:17:48.985 } 00:17:48.985 ] 00:17:48.985 }' 00:17:48.985 01:00:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.985 01:00:23 -- common/autotest_common.sh@10 -- # set +x 00:17:49.552 01:00:23 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:17:49.552 01:00:23 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:49.552 01:00:23 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:49.810 01:00:24 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:49.810 
01:00:24 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:49.810 01:00:24 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:50.069 01:00:24 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:50.069 01:00:24 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:50.069 01:00:24 -- bdev/bdev_raid.sh@489 -- # i=2 00:17:50.069 01:00:24 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:50.328 [2024-11-18 01:00:24.574671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:50.328 [2024-11-18 01:00:24.575108] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.328 [2024-11-18 01:00:24.575191] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:50.328 [2024-11-18 01:00:24.575403] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.328 [2024-11-18 01:00:24.575992] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.328 [2024-11-18 01:00:24.576078] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:50.328 [2024-11-18 01:00:24.576274] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:50.328 [2024-11-18 01:00:24.576380] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:50.328 [2024-11-18 01:00:24.576468] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.328 [2024-11-18 01:00:24.576544] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:17:50.328 [2024-11-18 01:00:24.576729] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:50.328 pt3 00:17:50.328 01:00:24 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:50.328 01:00:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:50.328 01:00:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:50.328 01:00:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:50.328 01:00:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:50.328 01:00:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:50.328 01:00:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:50.328 01:00:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:50.328 01:00:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:50.328 01:00:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:50.328 01:00:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.328 01:00:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.587 01:00:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:50.587 "name": "raid_bdev1", 00:17:50.587 "uuid": "18d17888-03d1-476e-ba7f-014245228e86", 00:17:50.587 "strip_size_kb": 0, 00:17:50.587 "state": "configuring", 00:17:50.587 "raid_level": "raid1", 00:17:50.587 "superblock": true, 00:17:50.587 "num_base_bdevs": 3, 00:17:50.587 "num_base_bdevs_discovered": 1, 00:17:50.587 "num_base_bdevs_operational": 2, 00:17:50.587 
"base_bdevs_list": [ 00:17:50.587 { 00:17:50.587 "name": null, 00:17:50.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.587 "is_configured": false, 00:17:50.587 "data_offset": 2048, 00:17:50.587 "data_size": 63488 00:17:50.587 }, 00:17:50.587 { 00:17:50.587 "name": null, 00:17:50.587 "uuid": "44e3c87a-8f16-5abb-891a-2a0eee249178", 00:17:50.587 "is_configured": false, 00:17:50.587 "data_offset": 2048, 00:17:50.587 "data_size": 63488 00:17:50.587 }, 00:17:50.587 { 00:17:50.587 "name": "pt3", 00:17:50.587 "uuid": "915b2ad7-0323-54c6-9c15-6e9e6cdf4d39", 00:17:50.587 "is_configured": true, 00:17:50.587 "data_offset": 2048, 00:17:50.587 "data_size": 63488 00:17:50.587 } 00:17:50.587 ] 00:17:50.587 }' 00:17:50.587 01:00:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:50.587 01:00:24 -- common/autotest_common.sh@10 -- # set +x 00:17:51.155 01:00:25 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:17:51.155 01:00:25 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:51.155 01:00:25 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:51.155 [2024-11-18 01:00:25.546892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:51.155 [2024-11-18 01:00:25.547333] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.155 [2024-11-18 01:00:25.547415] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:51.155 [2024-11-18 01:00:25.547523] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.155 [2024-11-18 01:00:25.548173] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.155 [2024-11-18 01:00:25.548325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:51.155 [2024-11-18 01:00:25.548474] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:51.155 [2024-11-18 01:00:25.548533] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:51.155 [2024-11-18 01:00:25.548688] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:17:51.155 [2024-11-18 01:00:25.548723] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:51.155 [2024-11-18 01:00:25.548821] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:17:51.155 [2024-11-18 01:00:25.549196] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:17:51.155 [2024-11-18 01:00:25.549240] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:17:51.155 [2024-11-18 01:00:25.549369] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.155 pt2 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:51.420 01:00:25 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:51.420 "name": "raid_bdev1", 00:17:51.420 "uuid": "18d17888-03d1-476e-ba7f-014245228e86", 00:17:51.420 "strip_size_kb": 0, 00:17:51.420 "state": "online", 00:17:51.420 "raid_level": "raid1", 00:17:51.420 "superblock": true, 00:17:51.420 "num_base_bdevs": 3, 00:17:51.420 "num_base_bdevs_discovered": 2, 00:17:51.420 "num_base_bdevs_operational": 2, 00:17:51.420 "base_bdevs_list": [ 00:17:51.420 { 00:17:51.420 "name": null, 00:17:51.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.420 "is_configured": false, 00:17:51.420 "data_offset": 2048, 00:17:51.420 "data_size": 63488 00:17:51.420 }, 00:17:51.420 { 00:17:51.420 "name": "pt2", 00:17:51.420 "uuid": "44e3c87a-8f16-5abb-891a-2a0eee249178", 00:17:51.420 "is_configured": true, 00:17:51.420 "data_offset": 2048, 00:17:51.420 "data_size": 63488 00:17:51.420 }, 00:17:51.420 { 00:17:51.420 "name": "pt3", 00:17:51.420 "uuid": "915b2ad7-0323-54c6-9c15-6e9e6cdf4d39", 00:17:51.420 "is_configured": true, 00:17:51.420 "data_offset": 2048, 00:17:51.420 "data_size": 63488 00:17:51.420 } 00:17:51.420 ] 00:17:51.420 }' 00:17:51.420 01:00:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:51.420 01:00:25 -- common/autotest_common.sh@10 -- # set +x 00:17:51.999 01:00:26 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:51.999 01:00:26 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:17:52.258 [2024-11-18 01:00:26.607274] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.258 01:00:26 -- bdev/bdev_raid.sh@506 -- # '[' 18d17888-03d1-476e-ba7f-014245228e86 '!=' 18d17888-03d1-476e-ba7f-014245228e86 ']' 00:17:52.258 01:00:26 -- bdev/bdev_raid.sh@511 -- # killprocess 128596 00:17:52.258 01:00:26 -- common/autotest_common.sh@936 -- # '[' -z 128596 ']' 00:17:52.258 01:00:26 -- common/autotest_common.sh@940 -- # kill -0 128596 00:17:52.258 01:00:26 -- common/autotest_common.sh@941 -- # uname 00:17:52.258 01:00:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:52.258 01:00:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128596 00:17:52.517 killing process with pid 128596 00:17:52.517 01:00:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:52.517 01:00:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:52.517 01:00:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128596' 00:17:52.517 01:00:26 -- common/autotest_common.sh@955 -- # kill 128596 00:17:52.517 01:00:26 -- common/autotest_common.sh@960 -- # wait 128596 00:17:52.517 [2024-11-18 01:00:26.665251] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.517 [2024-11-18 01:00:26.665349] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.517 [2024-11-18 01:00:26.665416] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.517 [2024-11-18 01:00:26.665425] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:17:52.517 [2024-11-18 01:00:26.728499] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.776 01:00:27 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:52.776 00:17:52.776 real 0m18.318s 00:17:52.776 user 0m33.145s 00:17:52.776 sys 0m3.141s 00:17:52.776 01:00:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:52.776 01:00:27 -- common/autotest_common.sh@10 -- # set +x 00:17:52.776 ************************************ 00:17:52.776 END TEST raid_superblock_test 00:17:52.776 ************************************ 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:53.035 01:00:27 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:53.035 01:00:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:53.035 01:00:27 -- common/autotest_common.sh@10 -- # set +x 00:17:53.035 ************************************ 00:17:53.035 START TEST raid_state_function_test 00:17:53.035 ************************************ 00:17:53.035 01:00:27 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 false 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:53.035 
01:00:27 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:53.035 01:00:27 -- bdev/bdev_raid.sh@226 -- # raid_pid=129193 00:17:53.036 01:00:27 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:53.036 Process raid pid: 129193 00:17:53.036 01:00:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129193' 00:17:53.036 01:00:27 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129193 /var/tmp/spdk-raid.sock 00:17:53.036 01:00:27 -- common/autotest_common.sh@829 -- # '[' -z 129193 ']' 00:17:53.036 01:00:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:53.036 01:00:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.036 01:00:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:53.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:53.036 01:00:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.036 01:00:27 -- common/autotest_common.sh@10 -- # set +x 00:17:53.036 [2024-11-18 01:00:27.276901] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:53.036 [2024-11-18 01:00:27.277302] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.036 [2024-11-18 01:00:27.424924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.295 [2024-11-18 01:00:27.518459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.295 [2024-11-18 01:00:27.603657] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.862 01:00:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.862 01:00:28 -- common/autotest_common.sh@862 -- # return 0 00:17:53.862 01:00:28 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:54.121 [2024-11-18 01:00:28.406057] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:54.121 [2024-11-18 01:00:28.406440] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:54.121 [2024-11-18 01:00:28.406533] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.121 [2024-11-18 01:00:28.406590] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.121 [2024-11-18 01:00:28.406782] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:54.121 [2024-11-18 01:00:28.406867] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:54.121 [2024-11-18 01:00:28.406897] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:54.121 [2024-11-18 01:00:28.406948] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:54.121 01:00:28 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:54.121 01:00:28 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:17:54.121 01:00:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:54.121 01:00:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:54.121 01:00:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:54.121 01:00:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:54.121 01:00:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.121 01:00:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.121 01:00:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.121 01:00:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.121 01:00:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.121 01:00:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.380 01:00:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.380 "name": "Existed_Raid", 00:17:54.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.380 "strip_size_kb": 64, 00:17:54.380 "state": "configuring", 00:17:54.380 "raid_level": "raid0", 00:17:54.380 "superblock": false, 00:17:54.380 "num_base_bdevs": 4, 00:17:54.380 "num_base_bdevs_discovered": 0, 00:17:54.380 "num_base_bdevs_operational": 4, 00:17:54.380 "base_bdevs_list": [ 00:17:54.380 { 00:17:54.380 "name": "BaseBdev1", 00:17:54.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.380 "is_configured": false, 00:17:54.380 "data_offset": 0, 00:17:54.380 "data_size": 0 00:17:54.380 }, 00:17:54.380 { 00:17:54.380 "name": "BaseBdev2", 00:17:54.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.380 "is_configured": false, 00:17:54.380 "data_offset": 0, 00:17:54.380 "data_size": 0 00:17:54.380 }, 00:17:54.380 { 00:17:54.380 "name": "BaseBdev3", 00:17:54.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.380 "is_configured": false, 00:17:54.380 "data_offset": 0, 00:17:54.380 "data_size": 0 00:17:54.380 }, 00:17:54.380 { 00:17:54.380 "name": "BaseBdev4", 00:17:54.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.380 "is_configured": false, 00:17:54.380 "data_offset": 0, 00:17:54.380 "data_size": 0 00:17:54.380 } 00:17:54.380 ] 00:17:54.380 }' 00:17:54.380 01:00:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.380 01:00:28 -- common/autotest_common.sh@10 -- # set +x 00:17:54.956 01:00:29 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:54.956 [2024-11-18 01:00:29.330056] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.956 [2024-11-18 01:00:29.330406] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:54.956 01:00:29 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:55.214 [2024-11-18 01:00:29.526165] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.214 [2024-11-18 01:00:29.526486] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.214 [2024-11-18 01:00:29.526569] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.214 [2024-11-18 01:00:29.526628] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:17:55.214 [2024-11-18 01:00:29.526655] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:55.214 [2024-11-18 01:00:29.526693] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:55.214 [2024-11-18 01:00:29.526717] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:55.214 [2024-11-18 01:00:29.526761] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:55.214 01:00:29 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:55.474 [2024-11-18 01:00:29.742336] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.474 BaseBdev1 00:17:55.474 01:00:29 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:55.474 01:00:29 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:55.474 01:00:29 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:55.474 01:00:29 -- common/autotest_common.sh@899 -- # local i 00:17:55.474 01:00:29 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:55.474 01:00:29 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:55.474 01:00:29 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:55.733 01:00:30 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:55.992 [ 00:17:55.992 { 00:17:55.992 "name": "BaseBdev1", 00:17:55.992 "aliases": [ 00:17:55.992 "dc74cafe-dcd5-4cba-9a20-62501edaff66" 00:17:55.992 ], 00:17:55.992 "product_name": "Malloc disk", 00:17:55.992 "block_size": 512, 00:17:55.992 "num_blocks": 65536, 00:17:55.992 "uuid": "dc74cafe-dcd5-4cba-9a20-62501edaff66", 00:17:55.992 "assigned_rate_limits": { 00:17:55.992 "rw_ios_per_sec": 0, 00:17:55.992 "rw_mbytes_per_sec": 0, 00:17:55.992 "r_mbytes_per_sec": 0, 00:17:55.992 "w_mbytes_per_sec": 0 00:17:55.992 }, 00:17:55.992 "claimed": true, 00:17:55.992 "claim_type": "exclusive_write", 00:17:55.992 "zoned": false, 00:17:55.992 "supported_io_types": { 00:17:55.992 "read": true, 00:17:55.992 "write": true, 00:17:55.992 "unmap": true, 00:17:55.992 "write_zeroes": true, 00:17:55.992 "flush": true, 00:17:55.992 "reset": true, 00:17:55.992 "compare": false, 00:17:55.992 "compare_and_write": false, 00:17:55.992 "abort": true, 00:17:55.992 "nvme_admin": false, 00:17:55.992 "nvme_io": false 00:17:55.992 }, 00:17:55.992 "memory_domains": [ 00:17:55.992 { 00:17:55.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.992 "dma_device_type": 2 00:17:55.992 } 00:17:55.992 ], 00:17:55.992 "driver_specific": {} 00:17:55.992 } 00:17:55.992 ] 00:17:55.992 01:00:30 -- common/autotest_common.sh@905 -- # return 0 00:17:55.992 01:00:30 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:55.992 01:00:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:55.992 01:00:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:55.992 01:00:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:55.992 01:00:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:55.992 01:00:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:55.992 01:00:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:17:55.992 01:00:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.992 01:00:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.992 01:00:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.992 01:00:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.992 01:00:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.252 01:00:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:56.252 "name": "Existed_Raid", 00:17:56.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.252 "strip_size_kb": 64, 00:17:56.252 "state": "configuring", 00:17:56.252 "raid_level": "raid0", 00:17:56.252 "superblock": false, 00:17:56.252 "num_base_bdevs": 4, 00:17:56.252 "num_base_bdevs_discovered": 1, 00:17:56.252 "num_base_bdevs_operational": 4, 00:17:56.252 "base_bdevs_list": [ 00:17:56.252 { 00:17:56.252 "name": "BaseBdev1", 00:17:56.252 "uuid": "dc74cafe-dcd5-4cba-9a20-62501edaff66", 00:17:56.252 "is_configured": true, 00:17:56.252 "data_offset": 0, 00:17:56.252 "data_size": 65536 00:17:56.252 }, 00:17:56.252 { 00:17:56.252 "name": "BaseBdev2", 00:17:56.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.252 "is_configured": false, 00:17:56.252 "data_offset": 0, 00:17:56.252 "data_size": 0 00:17:56.252 }, 00:17:56.252 { 00:17:56.252 "name": "BaseBdev3", 00:17:56.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.252 "is_configured": false, 00:17:56.252 "data_offset": 0, 00:17:56.252 "data_size": 0 00:17:56.252 }, 00:17:56.252 { 00:17:56.252 "name": "BaseBdev4", 00:17:56.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.252 "is_configured": false, 00:17:56.252 "data_offset": 0, 00:17:56.252 "data_size": 0 00:17:56.252 } 00:17:56.252 ] 00:17:56.252 }' 00:17:56.252 01:00:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:56.252 01:00:30 -- common/autotest_common.sh@10 -- # set +x 00:17:56.821 01:00:31 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:56.821 [2024-11-18 01:00:31.190984] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.821 [2024-11-18 01:00:31.191334] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:56.821 01:00:31 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:56.821 01:00:31 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:57.080 [2024-11-18 01:00:31.383131] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.080 [2024-11-18 01:00:31.385848] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.080 [2024-11-18 01:00:31.386084] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.080 [2024-11-18 01:00:31.386197] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:57.080 [2024-11-18 01:00:31.386261] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:57.080 [2024-11-18 01:00:31.386289] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:57.080 [2024-11-18 01:00:31.386378] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:57.080 01:00:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:57.080 01:00:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:57.080 01:00:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:57.080 01:00:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:57.080 01:00:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:57.080 01:00:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:57.080 01:00:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:57.080 01:00:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:57.080 01:00:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.080 01:00:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.080 01:00:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.080 01:00:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.080 01:00:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.080 01:00:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.339 01:00:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.339 "name": "Existed_Raid", 00:17:57.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.339 "strip_size_kb": 64, 00:17:57.339 "state": "configuring", 00:17:57.339 "raid_level": "raid0", 00:17:57.339 "superblock": false, 00:17:57.339 "num_base_bdevs": 4, 00:17:57.339 "num_base_bdevs_discovered": 1, 00:17:57.339 "num_base_bdevs_operational": 4, 00:17:57.339 "base_bdevs_list": [ 00:17:57.339 { 00:17:57.339 "name": "BaseBdev1", 00:17:57.339 "uuid": "dc74cafe-dcd5-4cba-9a20-62501edaff66", 00:17:57.339 "is_configured": true, 00:17:57.339 "data_offset": 0, 00:17:57.339 "data_size": 65536 00:17:57.339 }, 00:17:57.339 { 00:17:57.339 "name": "BaseBdev2", 00:17:57.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.339 "is_configured": false, 00:17:57.339 "data_offset": 0, 00:17:57.339 "data_size": 0 00:17:57.339 }, 00:17:57.339 { 00:17:57.339 "name": "BaseBdev3", 00:17:57.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.339 "is_configured": false, 00:17:57.339 "data_offset": 0, 00:17:57.339 "data_size": 0 00:17:57.339 }, 00:17:57.339 { 00:17:57.339 "name": "BaseBdev4", 00:17:57.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.339 "is_configured": false, 00:17:57.339 "data_offset": 0, 00:17:57.339 "data_size": 0 00:17:57.339 } 00:17:57.339 ] 00:17:57.339 }' 00:17:57.339 01:00:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.339 01:00:31 -- common/autotest_common.sh@10 -- # set +x 00:17:57.907 01:00:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:58.166 [2024-11-18 01:00:32.400001] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.166 BaseBdev2 00:17:58.166 01:00:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:58.166 01:00:32 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:58.166 01:00:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:58.166 01:00:32 -- common/autotest_common.sh@899 -- # local i 00:17:58.166 01:00:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:58.166 01:00:32 -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:17:58.166 01:00:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:58.425 01:00:32 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:58.425 [ 00:17:58.425 { 00:17:58.425 "name": "BaseBdev2", 00:17:58.425 "aliases": [ 00:17:58.425 "c5960f73-a064-4150-83a6-eb1f801e1df6" 00:17:58.425 ], 00:17:58.425 "product_name": "Malloc disk", 00:17:58.425 "block_size": 512, 00:17:58.425 "num_blocks": 65536, 00:17:58.425 "uuid": "c5960f73-a064-4150-83a6-eb1f801e1df6", 00:17:58.425 "assigned_rate_limits": { 00:17:58.425 "rw_ios_per_sec": 0, 00:17:58.425 "rw_mbytes_per_sec": 0, 00:17:58.425 "r_mbytes_per_sec": 0, 00:17:58.425 "w_mbytes_per_sec": 0 00:17:58.425 }, 00:17:58.425 "claimed": true, 00:17:58.425 "claim_type": "exclusive_write", 00:17:58.425 "zoned": false, 00:17:58.425 "supported_io_types": { 00:17:58.425 "read": true, 00:17:58.425 "write": true, 00:17:58.425 "unmap": true, 00:17:58.425 "write_zeroes": true, 00:17:58.425 "flush": true, 00:17:58.425 "reset": true, 00:17:58.425 "compare": false, 00:17:58.425 "compare_and_write": false, 00:17:58.425 "abort": true, 00:17:58.425 "nvme_admin": false, 00:17:58.425 "nvme_io": false 00:17:58.425 }, 00:17:58.425 "memory_domains": [ 00:17:58.425 { 00:17:58.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.425 "dma_device_type": 2 00:17:58.425 } 00:17:58.425 ], 00:17:58.425 "driver_specific": {} 00:17:58.425 } 00:17:58.425 ] 00:17:58.425 01:00:32 -- common/autotest_common.sh@905 -- # return 0 00:17:58.425 01:00:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:58.425 01:00:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:58.425 01:00:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:58.425 01:00:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:58.425 01:00:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:58.425 01:00:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:58.425 01:00:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:58.425 01:00:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:58.425 01:00:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:58.425 01:00:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:58.425 01:00:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:58.425 01:00:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:58.425 01:00:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.425 01:00:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.685 01:00:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:58.685 "name": "Existed_Raid", 00:17:58.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.685 "strip_size_kb": 64, 00:17:58.685 "state": "configuring", 00:17:58.685 "raid_level": "raid0", 00:17:58.685 "superblock": false, 00:17:58.685 "num_base_bdevs": 4, 00:17:58.685 "num_base_bdevs_discovered": 2, 00:17:58.685 "num_base_bdevs_operational": 4, 00:17:58.685 "base_bdevs_list": [ 00:17:58.685 { 00:17:58.685 "name": "BaseBdev1", 00:17:58.685 "uuid": "dc74cafe-dcd5-4cba-9a20-62501edaff66", 00:17:58.685 "is_configured": true, 00:17:58.685 "data_offset": 0, 00:17:58.685 "data_size": 65536 00:17:58.685 }, 
00:17:58.685 { 00:17:58.685 "name": "BaseBdev2", 00:17:58.685 "uuid": "c5960f73-a064-4150-83a6-eb1f801e1df6", 00:17:58.685 "is_configured": true, 00:17:58.685 "data_offset": 0, 00:17:58.685 "data_size": 65536 00:17:58.685 }, 00:17:58.685 { 00:17:58.685 "name": "BaseBdev3", 00:17:58.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.685 "is_configured": false, 00:17:58.685 "data_offset": 0, 00:17:58.685 "data_size": 0 00:17:58.685 }, 00:17:58.685 { 00:17:58.685 "name": "BaseBdev4", 00:17:58.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.685 "is_configured": false, 00:17:58.685 "data_offset": 0, 00:17:58.685 "data_size": 0 00:17:58.685 } 00:17:58.685 ] 00:17:58.685 }' 00:17:58.685 01:00:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:58.685 01:00:32 -- common/autotest_common.sh@10 -- # set +x 00:17:59.254 01:00:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:59.512 [2024-11-18 01:00:33.793802] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:59.512 BaseBdev3 00:17:59.512 01:00:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:59.512 01:00:33 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:59.512 01:00:33 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:59.512 01:00:33 -- common/autotest_common.sh@899 -- # local i 00:17:59.512 01:00:33 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:59.512 01:00:33 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:59.513 01:00:33 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.772 01:00:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:00.031 [ 00:18:00.031 { 00:18:00.031 "name": "BaseBdev3", 00:18:00.031 "aliases": [ 00:18:00.031 "eaf614df-5c1a-4bf3-a2b5-17dfed9c3c40" 00:18:00.031 ], 00:18:00.031 "product_name": "Malloc disk", 00:18:00.031 "block_size": 512, 00:18:00.031 "num_blocks": 65536, 00:18:00.031 "uuid": "eaf614df-5c1a-4bf3-a2b5-17dfed9c3c40", 00:18:00.031 "assigned_rate_limits": { 00:18:00.031 "rw_ios_per_sec": 0, 00:18:00.031 "rw_mbytes_per_sec": 0, 00:18:00.031 "r_mbytes_per_sec": 0, 00:18:00.031 "w_mbytes_per_sec": 0 00:18:00.031 }, 00:18:00.031 "claimed": true, 00:18:00.031 "claim_type": "exclusive_write", 00:18:00.031 "zoned": false, 00:18:00.031 "supported_io_types": { 00:18:00.031 "read": true, 00:18:00.031 "write": true, 00:18:00.031 "unmap": true, 00:18:00.031 "write_zeroes": true, 00:18:00.031 "flush": true, 00:18:00.031 "reset": true, 00:18:00.031 "compare": false, 00:18:00.031 "compare_and_write": false, 00:18:00.031 "abort": true, 00:18:00.031 "nvme_admin": false, 00:18:00.031 "nvme_io": false 00:18:00.031 }, 00:18:00.031 "memory_domains": [ 00:18:00.031 { 00:18:00.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.031 "dma_device_type": 2 00:18:00.031 } 00:18:00.031 ], 00:18:00.031 "driver_specific": {} 00:18:00.031 } 00:18:00.031 ] 00:18:00.031 01:00:34 -- common/autotest_common.sh@905 -- # return 0 00:18:00.031 01:00:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:00.031 01:00:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:00.031 01:00:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:00.031 01:00:34 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:18:00.031 01:00:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:00.031 01:00:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:00.031 01:00:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:00.031 01:00:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:00.031 01:00:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.031 01:00:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.031 01:00:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.031 01:00:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.031 01:00:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.031 01:00:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.290 01:00:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.290 "name": "Existed_Raid", 00:18:00.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.290 "strip_size_kb": 64, 00:18:00.290 "state": "configuring", 00:18:00.290 "raid_level": "raid0", 00:18:00.290 "superblock": false, 00:18:00.290 "num_base_bdevs": 4, 00:18:00.290 "num_base_bdevs_discovered": 3, 00:18:00.290 "num_base_bdevs_operational": 4, 00:18:00.290 "base_bdevs_list": [ 00:18:00.291 { 00:18:00.291 "name": "BaseBdev1", 00:18:00.291 "uuid": "dc74cafe-dcd5-4cba-9a20-62501edaff66", 00:18:00.291 "is_configured": true, 00:18:00.291 "data_offset": 0, 00:18:00.291 "data_size": 65536 00:18:00.291 }, 00:18:00.291 { 00:18:00.291 "name": "BaseBdev2", 00:18:00.291 "uuid": "c5960f73-a064-4150-83a6-eb1f801e1df6", 00:18:00.291 "is_configured": true, 00:18:00.291 "data_offset": 0, 00:18:00.291 "data_size": 65536 00:18:00.291 }, 00:18:00.291 { 00:18:00.291 "name": "BaseBdev3", 00:18:00.291 "uuid": "eaf614df-5c1a-4bf3-a2b5-17dfed9c3c40", 00:18:00.291 "is_configured": true, 00:18:00.291 "data_offset": 0, 00:18:00.291 "data_size": 65536 00:18:00.291 }, 00:18:00.291 { 00:18:00.291 "name": "BaseBdev4", 00:18:00.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.291 "is_configured": false, 00:18:00.291 "data_offset": 0, 00:18:00.291 "data_size": 0 00:18:00.291 } 00:18:00.291 ] 00:18:00.291 }' 00:18:00.291 01:00:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.291 01:00:34 -- common/autotest_common.sh@10 -- # set +x 00:18:00.860 01:00:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:00.860 [2024-11-18 01:00:35.199665] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:00.860 [2024-11-18 01:00:35.200013] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:18:00.860 [2024-11-18 01:00:35.200059] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:00.860 [2024-11-18 01:00:35.200347] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:18:00.860 [2024-11-18 01:00:35.200890] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:18:00.860 [2024-11-18 01:00:35.201000] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:18:00.860 [2024-11-18 01:00:35.201326] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.860 BaseBdev4 00:18:00.860 01:00:35 -- bdev/bdev_raid.sh@257 -- # 
waitforbdev BaseBdev4 00:18:00.860 01:00:35 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:00.860 01:00:35 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:00.860 01:00:35 -- common/autotest_common.sh@899 -- # local i 00:18:00.860 01:00:35 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:00.860 01:00:35 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:00.860 01:00:35 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:01.118 01:00:35 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:01.378 [ 00:18:01.378 { 00:18:01.378 "name": "BaseBdev4", 00:18:01.378 "aliases": [ 00:18:01.378 "7ea771ff-572c-415e-9cf3-ead6b1d92aad" 00:18:01.378 ], 00:18:01.378 "product_name": "Malloc disk", 00:18:01.378 "block_size": 512, 00:18:01.378 "num_blocks": 65536, 00:18:01.378 "uuid": "7ea771ff-572c-415e-9cf3-ead6b1d92aad", 00:18:01.378 "assigned_rate_limits": { 00:18:01.378 "rw_ios_per_sec": 0, 00:18:01.378 "rw_mbytes_per_sec": 0, 00:18:01.378 "r_mbytes_per_sec": 0, 00:18:01.378 "w_mbytes_per_sec": 0 00:18:01.378 }, 00:18:01.378 "claimed": true, 00:18:01.378 "claim_type": "exclusive_write", 00:18:01.378 "zoned": false, 00:18:01.378 "supported_io_types": { 00:18:01.378 "read": true, 00:18:01.378 "write": true, 00:18:01.378 "unmap": true, 00:18:01.378 "write_zeroes": true, 00:18:01.378 "flush": true, 00:18:01.378 "reset": true, 00:18:01.378 "compare": false, 00:18:01.378 "compare_and_write": false, 00:18:01.378 "abort": true, 00:18:01.378 "nvme_admin": false, 00:18:01.378 "nvme_io": false 00:18:01.378 }, 00:18:01.378 "memory_domains": [ 00:18:01.378 { 00:18:01.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.378 "dma_device_type": 2 00:18:01.378 } 00:18:01.378 ], 00:18:01.378 "driver_specific": {} 00:18:01.378 } 00:18:01.378 ] 00:18:01.378 01:00:35 -- common/autotest_common.sh@905 -- # return 0 00:18:01.378 01:00:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:01.378 01:00:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:01.378 01:00:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:01.378 01:00:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:01.378 01:00:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:01.378 01:00:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:01.378 01:00:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:01.378 01:00:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:01.378 01:00:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.378 01:00:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:01.378 01:00:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.378 01:00:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.378 01:00:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.378 01:00:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.637 01:00:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.637 "name": "Existed_Raid", 00:18:01.637 "uuid": "47fdafd4-e42a-44e3-b471-ace54b2909b7", 00:18:01.637 "strip_size_kb": 64, 00:18:01.637 "state": "online", 00:18:01.637 "raid_level": "raid0", 00:18:01.637 "superblock": false, 00:18:01.637 
"num_base_bdevs": 4, 00:18:01.638 "num_base_bdevs_discovered": 4, 00:18:01.638 "num_base_bdevs_operational": 4, 00:18:01.638 "base_bdevs_list": [ 00:18:01.638 { 00:18:01.638 "name": "BaseBdev1", 00:18:01.638 "uuid": "dc74cafe-dcd5-4cba-9a20-62501edaff66", 00:18:01.638 "is_configured": true, 00:18:01.638 "data_offset": 0, 00:18:01.638 "data_size": 65536 00:18:01.638 }, 00:18:01.638 { 00:18:01.638 "name": "BaseBdev2", 00:18:01.638 "uuid": "c5960f73-a064-4150-83a6-eb1f801e1df6", 00:18:01.638 "is_configured": true, 00:18:01.638 "data_offset": 0, 00:18:01.638 "data_size": 65536 00:18:01.638 }, 00:18:01.638 { 00:18:01.638 "name": "BaseBdev3", 00:18:01.638 "uuid": "eaf614df-5c1a-4bf3-a2b5-17dfed9c3c40", 00:18:01.638 "is_configured": true, 00:18:01.638 "data_offset": 0, 00:18:01.638 "data_size": 65536 00:18:01.638 }, 00:18:01.638 { 00:18:01.638 "name": "BaseBdev4", 00:18:01.638 "uuid": "7ea771ff-572c-415e-9cf3-ead6b1d92aad", 00:18:01.638 "is_configured": true, 00:18:01.638 "data_offset": 0, 00:18:01.638 "data_size": 65536 00:18:01.638 } 00:18:01.638 ] 00:18:01.638 }' 00:18:01.638 01:00:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.638 01:00:35 -- common/autotest_common.sh@10 -- # set +x 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:02.206 [2024-11-18 01:00:36.512160] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:02.206 [2024-11-18 01:00:36.512400] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.206 [2024-11-18 01:00:36.512581] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.206 01:00:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.465 01:00:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:02.465 "name": "Existed_Raid", 00:18:02.465 "uuid": "47fdafd4-e42a-44e3-b471-ace54b2909b7", 00:18:02.465 "strip_size_kb": 64, 00:18:02.465 "state": "offline", 00:18:02.465 "raid_level": "raid0", 00:18:02.465 "superblock": false, 00:18:02.465 "num_base_bdevs": 4, 00:18:02.465 "num_base_bdevs_discovered": 3, 00:18:02.465 "num_base_bdevs_operational": 3, 00:18:02.465 
"base_bdevs_list": [ 00:18:02.465 { 00:18:02.465 "name": null, 00:18:02.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.466 "is_configured": false, 00:18:02.466 "data_offset": 0, 00:18:02.466 "data_size": 65536 00:18:02.466 }, 00:18:02.466 { 00:18:02.466 "name": "BaseBdev2", 00:18:02.466 "uuid": "c5960f73-a064-4150-83a6-eb1f801e1df6", 00:18:02.466 "is_configured": true, 00:18:02.466 "data_offset": 0, 00:18:02.466 "data_size": 65536 00:18:02.466 }, 00:18:02.466 { 00:18:02.466 "name": "BaseBdev3", 00:18:02.466 "uuid": "eaf614df-5c1a-4bf3-a2b5-17dfed9c3c40", 00:18:02.466 "is_configured": true, 00:18:02.466 "data_offset": 0, 00:18:02.466 "data_size": 65536 00:18:02.466 }, 00:18:02.466 { 00:18:02.466 "name": "BaseBdev4", 00:18:02.466 "uuid": "7ea771ff-572c-415e-9cf3-ead6b1d92aad", 00:18:02.466 "is_configured": true, 00:18:02.466 "data_offset": 0, 00:18:02.466 "data_size": 65536 00:18:02.466 } 00:18:02.466 ] 00:18:02.466 }' 00:18:02.466 01:00:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:02.466 01:00:36 -- common/autotest_common.sh@10 -- # set +x 00:18:03.034 01:00:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:03.034 01:00:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:03.034 01:00:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.034 01:00:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:03.293 01:00:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:03.293 01:00:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:03.293 01:00:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:03.552 [2024-11-18 01:00:37.762895] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:03.552 01:00:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:03.552 01:00:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:03.552 01:00:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.552 01:00:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:03.811 01:00:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:03.811 01:00:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:03.811 01:00:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:03.811 [2024-11-18 01:00:38.156344] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:03.811 01:00:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:03.811 01:00:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:03.811 01:00:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:03.811 01:00:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.070 01:00:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:04.070 01:00:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:04.070 01:00:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:04.329 [2024-11-18 01:00:38.633583] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:04.329 [2024-11-18 01:00:38.633915] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 
name Existed_Raid, state offline 00:18:04.329 01:00:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:04.329 01:00:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:04.329 01:00:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.329 01:00:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:04.588 01:00:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:04.588 01:00:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:04.588 01:00:38 -- bdev/bdev_raid.sh@287 -- # killprocess 129193 00:18:04.588 01:00:38 -- common/autotest_common.sh@936 -- # '[' -z 129193 ']' 00:18:04.588 01:00:38 -- common/autotest_common.sh@940 -- # kill -0 129193 00:18:04.588 01:00:38 -- common/autotest_common.sh@941 -- # uname 00:18:04.588 01:00:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.588 01:00:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129193 00:18:04.588 killing process with pid 129193 00:18:04.588 01:00:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:04.588 01:00:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:04.588 01:00:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129193' 00:18:04.588 01:00:38 -- common/autotest_common.sh@955 -- # kill 129193 00:18:04.588 01:00:38 -- common/autotest_common.sh@960 -- # wait 129193 00:18:04.588 [2024-11-18 01:00:38.894747] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.588 [2024-11-18 01:00:38.894849] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.156 ************************************ 00:18:05.156 END TEST raid_state_function_test 00:18:05.156 ************************************ 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:05.156 00:18:05.156 real 0m12.079s 00:18:05.156 user 0m21.227s 00:18:05.156 sys 0m2.281s 00:18:05.156 01:00:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:05.156 01:00:39 -- common/autotest_common.sh@10 -- # set +x 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:18:05.156 01:00:39 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:05.156 01:00:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:05.156 01:00:39 -- common/autotest_common.sh@10 -- # set +x 00:18:05.156 ************************************ 00:18:05.156 START TEST raid_state_function_test_sb 00:18:05.156 ************************************ 00:18:05.156 01:00:39 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 true 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 
00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@226 -- # raid_pid=129609 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:05.156 01:00:39 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129609' 00:18:05.157 Process raid pid: 129609 00:18:05.157 01:00:39 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129609 /var/tmp/spdk-raid.sock 00:18:05.157 01:00:39 -- common/autotest_common.sh@829 -- # '[' -z 129609 ']' 00:18:05.157 01:00:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:05.157 01:00:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.157 01:00:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:05.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:05.157 01:00:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.157 01:00:39 -- common/autotest_common.sh@10 -- # set +x 00:18:05.157 [2024-11-18 01:00:39.420347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:05.157 [2024-11-18 01:00:39.420846] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.415 [2024-11-18 01:00:39.564784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.415 [2024-11-18 01:00:39.656416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.415 [2024-11-18 01:00:39.736443] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.982 01:00:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.982 01:00:40 -- common/autotest_common.sh@862 -- # return 0 00:18:05.982 01:00:40 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:06.240 [2024-11-18 01:00:40.587000] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:06.240 [2024-11-18 01:00:40.587380] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:06.240 [2024-11-18 01:00:40.587476] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:06.240 [2024-11-18 01:00:40.587532] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:06.240 [2024-11-18 01:00:40.587558] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:06.240 [2024-11-18 01:00:40.587630] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:06.240 [2024-11-18 01:00:40.587712] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:06.240 [2024-11-18 01:00:40.587769] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:06.240 01:00:40 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:06.240 01:00:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:06.240 01:00:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:06.240 01:00:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:06.240 01:00:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:06.240 01:00:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:06.240 01:00:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.240 01:00:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.240 01:00:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.240 01:00:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.240 01:00:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.240 01:00:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.498 01:00:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.498 "name": "Existed_Raid", 00:18:06.498 "uuid": "e43d6c51-2085-4a86-9812-424159ff5506", 00:18:06.498 "strip_size_kb": 64, 00:18:06.498 "state": "configuring", 00:18:06.498 "raid_level": "raid0", 00:18:06.498 "superblock": true, 00:18:06.498 "num_base_bdevs": 4, 00:18:06.498 "num_base_bdevs_discovered": 0, 00:18:06.498 "num_base_bdevs_operational": 4, 00:18:06.498 "base_bdevs_list": [ 00:18:06.498 { 00:18:06.498 
"name": "BaseBdev1", 00:18:06.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.498 "is_configured": false, 00:18:06.498 "data_offset": 0, 00:18:06.498 "data_size": 0 00:18:06.498 }, 00:18:06.498 { 00:18:06.498 "name": "BaseBdev2", 00:18:06.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.498 "is_configured": false, 00:18:06.498 "data_offset": 0, 00:18:06.498 "data_size": 0 00:18:06.498 }, 00:18:06.498 { 00:18:06.498 "name": "BaseBdev3", 00:18:06.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.498 "is_configured": false, 00:18:06.498 "data_offset": 0, 00:18:06.498 "data_size": 0 00:18:06.498 }, 00:18:06.498 { 00:18:06.498 "name": "BaseBdev4", 00:18:06.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.498 "is_configured": false, 00:18:06.498 "data_offset": 0, 00:18:06.498 "data_size": 0 00:18:06.498 } 00:18:06.498 ] 00:18:06.498 }' 00:18:06.498 01:00:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.498 01:00:40 -- common/autotest_common.sh@10 -- # set +x 00:18:07.064 01:00:41 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:07.321 [2024-11-18 01:00:41.607065] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:07.321 [2024-11-18 01:00:41.607407] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:07.321 01:00:41 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:07.579 [2024-11-18 01:00:41.807126] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:07.579 [2024-11-18 01:00:41.807502] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:07.579 [2024-11-18 01:00:41.807592] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:07.579 [2024-11-18 01:00:41.807653] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:07.580 [2024-11-18 01:00:41.807681] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:07.580 [2024-11-18 01:00:41.807718] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:07.580 [2024-11-18 01:00:41.807794] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:07.580 [2024-11-18 01:00:41.807848] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:07.580 01:00:41 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:07.837 [2024-11-18 01:00:42.015383] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.837 BaseBdev1 00:18:07.837 01:00:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:07.837 01:00:42 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:07.837 01:00:42 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:07.837 01:00:42 -- common/autotest_common.sh@899 -- # local i 00:18:07.837 01:00:42 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:07.837 01:00:42 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:07.837 01:00:42 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:07.837 01:00:42 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:08.099 [ 00:18:08.099 { 00:18:08.099 "name": "BaseBdev1", 00:18:08.099 "aliases": [ 00:18:08.099 "f158229d-1096-4824-a5b7-4e6e90a19475" 00:18:08.099 ], 00:18:08.099 "product_name": "Malloc disk", 00:18:08.099 "block_size": 512, 00:18:08.099 "num_blocks": 65536, 00:18:08.099 "uuid": "f158229d-1096-4824-a5b7-4e6e90a19475", 00:18:08.099 "assigned_rate_limits": { 00:18:08.099 "rw_ios_per_sec": 0, 00:18:08.099 "rw_mbytes_per_sec": 0, 00:18:08.099 "r_mbytes_per_sec": 0, 00:18:08.099 "w_mbytes_per_sec": 0 00:18:08.099 }, 00:18:08.099 "claimed": true, 00:18:08.099 "claim_type": "exclusive_write", 00:18:08.099 "zoned": false, 00:18:08.099 "supported_io_types": { 00:18:08.099 "read": true, 00:18:08.099 "write": true, 00:18:08.099 "unmap": true, 00:18:08.099 "write_zeroes": true, 00:18:08.099 "flush": true, 00:18:08.099 "reset": true, 00:18:08.099 "compare": false, 00:18:08.099 "compare_and_write": false, 00:18:08.099 "abort": true, 00:18:08.099 "nvme_admin": false, 00:18:08.099 "nvme_io": false 00:18:08.099 }, 00:18:08.099 "memory_domains": [ 00:18:08.099 { 00:18:08.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.099 "dma_device_type": 2 00:18:08.099 } 00:18:08.099 ], 00:18:08.099 "driver_specific": {} 00:18:08.099 } 00:18:08.099 ] 00:18:08.099 01:00:42 -- common/autotest_common.sh@905 -- # return 0 00:18:08.099 01:00:42 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:08.099 01:00:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:08.099 01:00:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:08.099 01:00:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:08.099 01:00:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:08.099 01:00:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:08.099 01:00:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:08.099 01:00:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:08.099 01:00:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:08.099 01:00:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:08.099 01:00:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.099 01:00:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.357 01:00:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:08.357 "name": "Existed_Raid", 00:18:08.357 "uuid": "9ac562c5-1a18-4920-84f3-983a45118cf7", 00:18:08.357 "strip_size_kb": 64, 00:18:08.357 "state": "configuring", 00:18:08.357 "raid_level": "raid0", 00:18:08.357 "superblock": true, 00:18:08.357 "num_base_bdevs": 4, 00:18:08.357 "num_base_bdevs_discovered": 1, 00:18:08.357 "num_base_bdevs_operational": 4, 00:18:08.357 "base_bdevs_list": [ 00:18:08.357 { 00:18:08.357 "name": "BaseBdev1", 00:18:08.357 "uuid": "f158229d-1096-4824-a5b7-4e6e90a19475", 00:18:08.357 "is_configured": true, 00:18:08.357 "data_offset": 2048, 00:18:08.357 "data_size": 63488 00:18:08.357 }, 00:18:08.357 { 00:18:08.357 "name": "BaseBdev2", 00:18:08.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.357 "is_configured": false, 00:18:08.357 "data_offset": 0, 00:18:08.357 "data_size": 0 00:18:08.357 }, 
00:18:08.357 { 00:18:08.357 "name": "BaseBdev3", 00:18:08.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.357 "is_configured": false, 00:18:08.357 "data_offset": 0, 00:18:08.357 "data_size": 0 00:18:08.357 }, 00:18:08.357 { 00:18:08.357 "name": "BaseBdev4", 00:18:08.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.357 "is_configured": false, 00:18:08.357 "data_offset": 0, 00:18:08.357 "data_size": 0 00:18:08.357 } 00:18:08.357 ] 00:18:08.357 }' 00:18:08.357 01:00:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.357 01:00:42 -- common/autotest_common.sh@10 -- # set +x 00:18:08.922 01:00:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:09.181 [2024-11-18 01:00:43.347717] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:09.181 [2024-11-18 01:00:43.348066] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:09.181 01:00:43 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:09.181 01:00:43 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:09.181 01:00:43 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:09.439 BaseBdev1 00:18:09.439 01:00:43 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:09.439 01:00:43 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:09.439 01:00:43 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:09.439 01:00:43 -- common/autotest_common.sh@899 -- # local i 00:18:09.439 01:00:43 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:09.439 01:00:43 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:09.439 01:00:43 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:09.697 01:00:44 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:09.955 [ 00:18:09.955 { 00:18:09.955 "name": "BaseBdev1", 00:18:09.955 "aliases": [ 00:18:09.955 "6965e720-e802-420b-b9f4-9be0e1cafd12" 00:18:09.955 ], 00:18:09.955 "product_name": "Malloc disk", 00:18:09.955 "block_size": 512, 00:18:09.955 "num_blocks": 65536, 00:18:09.955 "uuid": "6965e720-e802-420b-b9f4-9be0e1cafd12", 00:18:09.955 "assigned_rate_limits": { 00:18:09.955 "rw_ios_per_sec": 0, 00:18:09.955 "rw_mbytes_per_sec": 0, 00:18:09.955 "r_mbytes_per_sec": 0, 00:18:09.955 "w_mbytes_per_sec": 0 00:18:09.955 }, 00:18:09.955 "claimed": false, 00:18:09.955 "zoned": false, 00:18:09.955 "supported_io_types": { 00:18:09.955 "read": true, 00:18:09.955 "write": true, 00:18:09.955 "unmap": true, 00:18:09.955 "write_zeroes": true, 00:18:09.955 "flush": true, 00:18:09.955 "reset": true, 00:18:09.955 "compare": false, 00:18:09.955 "compare_and_write": false, 00:18:09.955 "abort": true, 00:18:09.955 "nvme_admin": false, 00:18:09.955 "nvme_io": false 00:18:09.955 }, 00:18:09.955 "memory_domains": [ 00:18:09.955 { 00:18:09.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.955 "dma_device_type": 2 00:18:09.955 } 00:18:09.955 ], 00:18:09.955 "driver_specific": {} 00:18:09.955 } 00:18:09.955 ] 00:18:09.955 01:00:44 -- common/autotest_common.sh@905 -- # return 0 00:18:09.955 01:00:44 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:10.213 [2024-11-18 01:00:44.400798] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:10.213 [2024-11-18 01:00:44.403747] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:10.213 [2024-11-18 01:00:44.404010] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:10.213 [2024-11-18 01:00:44.404114] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:10.213 [2024-11-18 01:00:44.404184] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:10.213 [2024-11-18 01:00:44.404218] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:10.213 [2024-11-18 01:00:44.404311] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:10.213 01:00:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:10.213 01:00:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:10.213 01:00:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:10.213 01:00:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:10.213 01:00:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:10.213 01:00:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:10.213 01:00:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:10.213 01:00:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:10.213 01:00:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.213 01:00:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.213 01:00:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.213 01:00:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.213 01:00:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.213 01:00:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.472 01:00:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.472 "name": "Existed_Raid", 00:18:10.472 "uuid": "23295a7f-42f7-44cb-8027-d46746f4381d", 00:18:10.472 "strip_size_kb": 64, 00:18:10.472 "state": "configuring", 00:18:10.472 "raid_level": "raid0", 00:18:10.472 "superblock": true, 00:18:10.472 "num_base_bdevs": 4, 00:18:10.472 "num_base_bdevs_discovered": 1, 00:18:10.472 "num_base_bdevs_operational": 4, 00:18:10.472 "base_bdevs_list": [ 00:18:10.472 { 00:18:10.472 "name": "BaseBdev1", 00:18:10.472 "uuid": "6965e720-e802-420b-b9f4-9be0e1cafd12", 00:18:10.472 "is_configured": true, 00:18:10.472 "data_offset": 2048, 00:18:10.472 "data_size": 63488 00:18:10.472 }, 00:18:10.472 { 00:18:10.472 "name": "BaseBdev2", 00:18:10.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.472 "is_configured": false, 00:18:10.472 "data_offset": 0, 00:18:10.472 "data_size": 0 00:18:10.472 }, 00:18:10.472 { 00:18:10.472 "name": "BaseBdev3", 00:18:10.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.472 "is_configured": false, 00:18:10.472 "data_offset": 0, 00:18:10.472 "data_size": 0 00:18:10.472 }, 00:18:10.472 { 00:18:10.472 "name": "BaseBdev4", 00:18:10.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.472 "is_configured": 
false, 00:18:10.472 "data_offset": 0, 00:18:10.472 "data_size": 0 00:18:10.472 } 00:18:10.472 ] 00:18:10.472 }' 00:18:10.472 01:00:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.472 01:00:44 -- common/autotest_common.sh@10 -- # set +x 00:18:11.038 01:00:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:11.038 [2024-11-18 01:00:45.416108] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:11.038 BaseBdev2 00:18:11.038 01:00:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:11.038 01:00:45 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:11.038 01:00:45 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:11.038 01:00:45 -- common/autotest_common.sh@899 -- # local i 00:18:11.038 01:00:45 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:11.296 01:00:45 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:11.296 01:00:45 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:11.554 01:00:45 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:11.554 [ 00:18:11.554 { 00:18:11.554 "name": "BaseBdev2", 00:18:11.554 "aliases": [ 00:18:11.554 "0fd4b14d-08f6-48cf-9a79-fc4cb1b6fc4f" 00:18:11.554 ], 00:18:11.554 "product_name": "Malloc disk", 00:18:11.554 "block_size": 512, 00:18:11.554 "num_blocks": 65536, 00:18:11.554 "uuid": "0fd4b14d-08f6-48cf-9a79-fc4cb1b6fc4f", 00:18:11.554 "assigned_rate_limits": { 00:18:11.554 "rw_ios_per_sec": 0, 00:18:11.554 "rw_mbytes_per_sec": 0, 00:18:11.554 "r_mbytes_per_sec": 0, 00:18:11.554 "w_mbytes_per_sec": 0 00:18:11.554 }, 00:18:11.554 "claimed": true, 00:18:11.554 "claim_type": "exclusive_write", 00:18:11.554 "zoned": false, 00:18:11.554 "supported_io_types": { 00:18:11.554 "read": true, 00:18:11.554 "write": true, 00:18:11.554 "unmap": true, 00:18:11.554 "write_zeroes": true, 00:18:11.554 "flush": true, 00:18:11.554 "reset": true, 00:18:11.554 "compare": false, 00:18:11.554 "compare_and_write": false, 00:18:11.554 "abort": true, 00:18:11.554 "nvme_admin": false, 00:18:11.554 "nvme_io": false 00:18:11.554 }, 00:18:11.554 "memory_domains": [ 00:18:11.554 { 00:18:11.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.554 "dma_device_type": 2 00:18:11.554 } 00:18:11.554 ], 00:18:11.554 "driver_specific": {} 00:18:11.554 } 00:18:11.554 ] 00:18:11.554 01:00:45 -- common/autotest_common.sh@905 -- # return 0 00:18:11.554 01:00:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:11.554 01:00:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:11.554 01:00:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:11.554 01:00:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:11.554 01:00:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:11.554 01:00:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:11.554 01:00:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:11.554 01:00:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:11.554 01:00:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:11.554 01:00:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:11.554 01:00:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:11.554 
01:00:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:11.554 01:00:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.554 01:00:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.812 01:00:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:11.812 "name": "Existed_Raid", 00:18:11.812 "uuid": "23295a7f-42f7-44cb-8027-d46746f4381d", 00:18:11.812 "strip_size_kb": 64, 00:18:11.812 "state": "configuring", 00:18:11.812 "raid_level": "raid0", 00:18:11.812 "superblock": true, 00:18:11.812 "num_base_bdevs": 4, 00:18:11.812 "num_base_bdevs_discovered": 2, 00:18:11.812 "num_base_bdevs_operational": 4, 00:18:11.812 "base_bdevs_list": [ 00:18:11.812 { 00:18:11.812 "name": "BaseBdev1", 00:18:11.812 "uuid": "6965e720-e802-420b-b9f4-9be0e1cafd12", 00:18:11.812 "is_configured": true, 00:18:11.812 "data_offset": 2048, 00:18:11.812 "data_size": 63488 00:18:11.812 }, 00:18:11.812 { 00:18:11.812 "name": "BaseBdev2", 00:18:11.812 "uuid": "0fd4b14d-08f6-48cf-9a79-fc4cb1b6fc4f", 00:18:11.812 "is_configured": true, 00:18:11.812 "data_offset": 2048, 00:18:11.812 "data_size": 63488 00:18:11.812 }, 00:18:11.812 { 00:18:11.812 "name": "BaseBdev3", 00:18:11.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.812 "is_configured": false, 00:18:11.812 "data_offset": 0, 00:18:11.812 "data_size": 0 00:18:11.812 }, 00:18:11.812 { 00:18:11.812 "name": "BaseBdev4", 00:18:11.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.812 "is_configured": false, 00:18:11.812 "data_offset": 0, 00:18:11.812 "data_size": 0 00:18:11.812 } 00:18:11.812 ] 00:18:11.812 }' 00:18:11.812 01:00:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:11.812 01:00:46 -- common/autotest_common.sh@10 -- # set +x 00:18:12.377 01:00:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:12.635 [2024-11-18 01:00:46.941960] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:12.635 BaseBdev3 00:18:12.635 01:00:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:12.635 01:00:46 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:12.635 01:00:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:12.635 01:00:46 -- common/autotest_common.sh@899 -- # local i 00:18:12.635 01:00:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:12.635 01:00:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:12.635 01:00:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:12.894 01:00:47 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:13.151 [ 00:18:13.151 { 00:18:13.151 "name": "BaseBdev3", 00:18:13.151 "aliases": [ 00:18:13.151 "45f3c26b-aa56-4d94-b705-64cc0e466678" 00:18:13.152 ], 00:18:13.152 "product_name": "Malloc disk", 00:18:13.152 "block_size": 512, 00:18:13.152 "num_blocks": 65536, 00:18:13.152 "uuid": "45f3c26b-aa56-4d94-b705-64cc0e466678", 00:18:13.152 "assigned_rate_limits": { 00:18:13.152 "rw_ios_per_sec": 0, 00:18:13.152 "rw_mbytes_per_sec": 0, 00:18:13.152 "r_mbytes_per_sec": 0, 00:18:13.152 "w_mbytes_per_sec": 0 00:18:13.152 }, 00:18:13.152 "claimed": true, 00:18:13.152 "claim_type": "exclusive_write", 00:18:13.152 "zoned": false, 
00:18:13.152 "supported_io_types": { 00:18:13.152 "read": true, 00:18:13.152 "write": true, 00:18:13.152 "unmap": true, 00:18:13.152 "write_zeroes": true, 00:18:13.152 "flush": true, 00:18:13.152 "reset": true, 00:18:13.152 "compare": false, 00:18:13.152 "compare_and_write": false, 00:18:13.152 "abort": true, 00:18:13.152 "nvme_admin": false, 00:18:13.152 "nvme_io": false 00:18:13.152 }, 00:18:13.152 "memory_domains": [ 00:18:13.152 { 00:18:13.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.152 "dma_device_type": 2 00:18:13.152 } 00:18:13.152 ], 00:18:13.152 "driver_specific": {} 00:18:13.152 } 00:18:13.152 ] 00:18:13.152 01:00:47 -- common/autotest_common.sh@905 -- # return 0 00:18:13.152 01:00:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:13.152 01:00:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:13.152 01:00:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:13.152 01:00:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:13.152 01:00:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:13.152 01:00:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:13.152 01:00:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:13.152 01:00:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:13.152 01:00:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.152 01:00:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.152 01:00:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:13.152 01:00:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.152 01:00:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.152 01:00:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.410 01:00:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:13.410 "name": "Existed_Raid", 00:18:13.410 "uuid": "23295a7f-42f7-44cb-8027-d46746f4381d", 00:18:13.410 "strip_size_kb": 64, 00:18:13.410 "state": "configuring", 00:18:13.410 "raid_level": "raid0", 00:18:13.410 "superblock": true, 00:18:13.410 "num_base_bdevs": 4, 00:18:13.410 "num_base_bdevs_discovered": 3, 00:18:13.410 "num_base_bdevs_operational": 4, 00:18:13.410 "base_bdevs_list": [ 00:18:13.410 { 00:18:13.410 "name": "BaseBdev1", 00:18:13.410 "uuid": "6965e720-e802-420b-b9f4-9be0e1cafd12", 00:18:13.410 "is_configured": true, 00:18:13.410 "data_offset": 2048, 00:18:13.410 "data_size": 63488 00:18:13.410 }, 00:18:13.410 { 00:18:13.410 "name": "BaseBdev2", 00:18:13.410 "uuid": "0fd4b14d-08f6-48cf-9a79-fc4cb1b6fc4f", 00:18:13.410 "is_configured": true, 00:18:13.410 "data_offset": 2048, 00:18:13.410 "data_size": 63488 00:18:13.410 }, 00:18:13.410 { 00:18:13.410 "name": "BaseBdev3", 00:18:13.410 "uuid": "45f3c26b-aa56-4d94-b705-64cc0e466678", 00:18:13.410 "is_configured": true, 00:18:13.410 "data_offset": 2048, 00:18:13.410 "data_size": 63488 00:18:13.410 }, 00:18:13.410 { 00:18:13.410 "name": "BaseBdev4", 00:18:13.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.410 "is_configured": false, 00:18:13.410 "data_offset": 0, 00:18:13.410 "data_size": 0 00:18:13.410 } 00:18:13.410 ] 00:18:13.410 }' 00:18:13.410 01:00:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:13.410 01:00:47 -- common/autotest_common.sh@10 -- # set +x 00:18:13.976 01:00:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:18:13.976 [2024-11-18 01:00:48.308169] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:13.976 [2024-11-18 01:00:48.308417] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:18:13.976 [2024-11-18 01:00:48.308430] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:13.976 [2024-11-18 01:00:48.308596] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:18:13.976 [2024-11-18 01:00:48.309043] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:18:13.976 [2024-11-18 01:00:48.309053] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:18:13.976 [2024-11-18 01:00:48.309206] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.976 BaseBdev4 00:18:13.976 01:00:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:13.976 01:00:48 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:13.976 01:00:48 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:13.976 01:00:48 -- common/autotest_common.sh@899 -- # local i 00:18:13.976 01:00:48 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:13.976 01:00:48 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:13.976 01:00:48 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:14.236 01:00:48 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:14.506 [ 00:18:14.506 { 00:18:14.506 "name": "BaseBdev4", 00:18:14.506 "aliases": [ 00:18:14.506 "a5a4dbef-fc9d-42f7-82da-042f3d7dc90c" 00:18:14.506 ], 00:18:14.506 "product_name": "Malloc disk", 00:18:14.506 "block_size": 512, 00:18:14.506 "num_blocks": 65536, 00:18:14.506 "uuid": "a5a4dbef-fc9d-42f7-82da-042f3d7dc90c", 00:18:14.506 "assigned_rate_limits": { 00:18:14.506 "rw_ios_per_sec": 0, 00:18:14.506 "rw_mbytes_per_sec": 0, 00:18:14.506 "r_mbytes_per_sec": 0, 00:18:14.506 "w_mbytes_per_sec": 0 00:18:14.506 }, 00:18:14.506 "claimed": true, 00:18:14.506 "claim_type": "exclusive_write", 00:18:14.506 "zoned": false, 00:18:14.506 "supported_io_types": { 00:18:14.506 "read": true, 00:18:14.506 "write": true, 00:18:14.506 "unmap": true, 00:18:14.506 "write_zeroes": true, 00:18:14.506 "flush": true, 00:18:14.506 "reset": true, 00:18:14.506 "compare": false, 00:18:14.506 "compare_and_write": false, 00:18:14.506 "abort": true, 00:18:14.506 "nvme_admin": false, 00:18:14.506 "nvme_io": false 00:18:14.506 }, 00:18:14.506 "memory_domains": [ 00:18:14.506 { 00:18:14.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.506 "dma_device_type": 2 00:18:14.506 } 00:18:14.506 ], 00:18:14.506 "driver_specific": {} 00:18:14.506 } 00:18:14.506 ] 00:18:14.506 01:00:48 -- common/autotest_common.sh@905 -- # return 0 00:18:14.506 01:00:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:14.506 01:00:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:14.506 01:00:48 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:14.506 01:00:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:14.506 01:00:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:14.506 01:00:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
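The verify_raid_bdev_state call being traced around this point reduces each time to the same RPC-plus-jq check; a condensed sketch, with the comparison logic assumed and the query and field names copied from the trace:

  info=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
         | jq -r '.[] | select(.name == "Existed_Raid")')
  # with all four base bdevs claimed, the raid0 bdev is expected to reach the online state
  [ "$(echo "$info" | jq -r '.state')" = online ] || exit 1
  [ "$(echo "$info" | jq -r '.num_base_bdevs_discovered')" -eq 4 ] || exit 1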
00:18:14.506 01:00:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:14.506 01:00:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:14.506 01:00:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.506 01:00:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.506 01:00:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.506 01:00:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.506 01:00:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.506 01:00:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.788 01:00:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.788 "name": "Existed_Raid", 00:18:14.788 "uuid": "23295a7f-42f7-44cb-8027-d46746f4381d", 00:18:14.788 "strip_size_kb": 64, 00:18:14.788 "state": "online", 00:18:14.788 "raid_level": "raid0", 00:18:14.788 "superblock": true, 00:18:14.788 "num_base_bdevs": 4, 00:18:14.788 "num_base_bdevs_discovered": 4, 00:18:14.788 "num_base_bdevs_operational": 4, 00:18:14.788 "base_bdevs_list": [ 00:18:14.788 { 00:18:14.788 "name": "BaseBdev1", 00:18:14.788 "uuid": "6965e720-e802-420b-b9f4-9be0e1cafd12", 00:18:14.788 "is_configured": true, 00:18:14.788 "data_offset": 2048, 00:18:14.788 "data_size": 63488 00:18:14.788 }, 00:18:14.788 { 00:18:14.788 "name": "BaseBdev2", 00:18:14.788 "uuid": "0fd4b14d-08f6-48cf-9a79-fc4cb1b6fc4f", 00:18:14.788 "is_configured": true, 00:18:14.788 "data_offset": 2048, 00:18:14.788 "data_size": 63488 00:18:14.788 }, 00:18:14.788 { 00:18:14.788 "name": "BaseBdev3", 00:18:14.788 "uuid": "45f3c26b-aa56-4d94-b705-64cc0e466678", 00:18:14.788 "is_configured": true, 00:18:14.788 "data_offset": 2048, 00:18:14.788 "data_size": 63488 00:18:14.788 }, 00:18:14.788 { 00:18:14.788 "name": "BaseBdev4", 00:18:14.788 "uuid": "a5a4dbef-fc9d-42f7-82da-042f3d7dc90c", 00:18:14.788 "is_configured": true, 00:18:14.788 "data_offset": 2048, 00:18:14.788 "data_size": 63488 00:18:14.788 } 00:18:14.788 ] 00:18:14.788 }' 00:18:14.788 01:00:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.788 01:00:48 -- common/autotest_common.sh@10 -- # set +x 00:18:15.367 01:00:49 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:15.626 [2024-11-18 01:00:49.780696] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:15.626 [2024-11-18 01:00:49.780756] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.626 [2024-11-18 01:00:49.780849] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.626 01:00:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.885 01:00:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.885 "name": "Existed_Raid", 00:18:15.885 "uuid": "23295a7f-42f7-44cb-8027-d46746f4381d", 00:18:15.885 "strip_size_kb": 64, 00:18:15.885 "state": "offline", 00:18:15.885 "raid_level": "raid0", 00:18:15.885 "superblock": true, 00:18:15.885 "num_base_bdevs": 4, 00:18:15.885 "num_base_bdevs_discovered": 3, 00:18:15.885 "num_base_bdevs_operational": 3, 00:18:15.885 "base_bdevs_list": [ 00:18:15.885 { 00:18:15.885 "name": null, 00:18:15.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.885 "is_configured": false, 00:18:15.885 "data_offset": 2048, 00:18:15.885 "data_size": 63488 00:18:15.885 }, 00:18:15.885 { 00:18:15.885 "name": "BaseBdev2", 00:18:15.885 "uuid": "0fd4b14d-08f6-48cf-9a79-fc4cb1b6fc4f", 00:18:15.885 "is_configured": true, 00:18:15.885 "data_offset": 2048, 00:18:15.885 "data_size": 63488 00:18:15.885 }, 00:18:15.885 { 00:18:15.885 "name": "BaseBdev3", 00:18:15.885 "uuid": "45f3c26b-aa56-4d94-b705-64cc0e466678", 00:18:15.885 "is_configured": true, 00:18:15.885 "data_offset": 2048, 00:18:15.885 "data_size": 63488 00:18:15.885 }, 00:18:15.885 { 00:18:15.885 "name": "BaseBdev4", 00:18:15.885 "uuid": "a5a4dbef-fc9d-42f7-82da-042f3d7dc90c", 00:18:15.885 "is_configured": true, 00:18:15.885 "data_offset": 2048, 00:18:15.885 "data_size": 63488 00:18:15.885 } 00:18:15.885 ] 00:18:15.885 }' 00:18:15.885 01:00:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.885 01:00:50 -- common/autotest_common.sh@10 -- # set +x 00:18:16.452 01:00:50 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:16.452 01:00:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:16.452 01:00:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:16.453 01:00:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.711 01:00:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:16.711 01:00:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.711 01:00:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:16.971 [2024-11-18 01:00:51.132985] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:16.971 01:00:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:16.971 01:00:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:16.971 01:00:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.971 01:00:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:16.971 01:00:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:16.971 01:00:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.971 01:00:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:18:17.230 [2024-11-18 01:00:51.594374] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:17.489 01:00:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:17.489 01:00:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:17.489 01:00:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.489 01:00:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:17.748 01:00:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:17.748 01:00:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.748 01:00:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:17.748 [2024-11-18 01:00:52.072000] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:17.748 [2024-11-18 01:00:52.072344] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:18:17.748 01:00:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:17.748 01:00:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:17.748 01:00:52 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.748 01:00:52 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:18.007 01:00:52 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:18.007 01:00:52 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:18.007 01:00:52 -- bdev/bdev_raid.sh@287 -- # killprocess 129609 00:18:18.007 01:00:52 -- common/autotest_common.sh@936 -- # '[' -z 129609 ']' 00:18:18.007 01:00:52 -- common/autotest_common.sh@940 -- # kill -0 129609 00:18:18.007 01:00:52 -- common/autotest_common.sh@941 -- # uname 00:18:18.007 01:00:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.007 01:00:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129609 00:18:18.007 01:00:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:18.007 01:00:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:18.007 01:00:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129609' 00:18:18.007 killing process with pid 129609 00:18:18.007 01:00:52 -- common/autotest_common.sh@955 -- # kill 129609 00:18:18.007 01:00:52 -- common/autotest_common.sh@960 -- # wait 129609 00:18:18.007 [2024-11-18 01:00:52.347565] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:18.007 [2024-11-18 01:00:52.347697] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:18.576 ************************************ 00:18:18.577 END TEST raid_state_function_test_sb 00:18:18.577 ************************************ 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:18.577 00:18:18.577 real 0m13.393s 00:18:18.577 user 0m23.669s 00:18:18.577 sys 0m2.423s 00:18:18.577 01:00:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:18.577 01:00:52 -- common/autotest_common.sh@10 -- # set +x 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:18.577 01:00:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:18.577 01:00:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:18.577 01:00:52 -- common/autotest_common.sh@10 -- # set +x 00:18:18.577 ************************************ 00:18:18.577 START 
TEST raid_superblock_test 00:18:18.577 ************************************ 00:18:18.577 01:00:52 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 4 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@357 -- # raid_pid=130038 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@358 -- # waitforlisten 130038 /var/tmp/spdk-raid.sock 00:18:18.577 01:00:52 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:18.577 01:00:52 -- common/autotest_common.sh@829 -- # '[' -z 130038 ']' 00:18:18.577 01:00:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:18.577 01:00:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.577 01:00:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:18.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:18.577 01:00:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.577 01:00:52 -- common/autotest_common.sh@10 -- # set +x 00:18:18.577 [2024-11-18 01:00:52.900643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
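For raid_superblock_test the base bdevs are passthru bdevs created on top of malloc bdevs with fixed UUIDs; a sketch of the stack the following trace builds, using the create commands exactly as they appear later in this log (the repetition for pt2..pt4 is elided, rpc.py path abbreviated):

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
      -u 00000000-0000-0000-0000-000000000001
  # ...likewise for malloc2/pt2, malloc3/pt3, malloc4/pt4, then assemble raid0 with a superblock (-s)
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
      -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s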
00:18:18.577 [2024-11-18 01:00:52.901137] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130038 ] 00:18:18.836 [2024-11-18 01:00:53.044633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.836 [2024-11-18 01:00:53.137585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.836 [2024-11-18 01:00:53.215953] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.772 01:00:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.772 01:00:53 -- common/autotest_common.sh@862 -- # return 0 00:18:19.772 01:00:53 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:19.772 01:00:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:19.772 01:00:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:19.772 01:00:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:19.772 01:00:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:19.772 01:00:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:19.772 01:00:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:19.772 01:00:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:19.772 01:00:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:19.772 malloc1 00:18:19.772 01:00:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:20.030 [2024-11-18 01:00:54.246630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:20.031 [2024-11-18 01:00:54.247063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.031 [2024-11-18 01:00:54.247171] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:18:20.031 [2024-11-18 01:00:54.247352] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.031 [2024-11-18 01:00:54.250681] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.031 [2024-11-18 01:00:54.250902] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:20.031 pt1 00:18:20.031 01:00:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:20.031 01:00:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:20.031 01:00:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:20.031 01:00:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:20.031 01:00:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:20.031 01:00:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:20.031 01:00:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:20.031 01:00:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:20.031 01:00:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:20.290 malloc2 00:18:20.290 01:00:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:20.290 [2024-11-18 01:00:54.639189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.290 [2024-11-18 01:00:54.639595] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.290 [2024-11-18 01:00:54.639712] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:18:20.290 [2024-11-18 01:00:54.639871] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.290 [2024-11-18 01:00:54.642954] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.290 [2024-11-18 01:00:54.643165] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.290 pt2 00:18:20.290 01:00:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:20.290 01:00:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:20.290 01:00:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:20.290 01:00:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:20.290 01:00:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:20.290 01:00:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:20.290 01:00:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:20.290 01:00:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:20.290 01:00:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:20.548 malloc3 00:18:20.548 01:00:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:20.807 [2024-11-18 01:00:55.134841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:20.807 [2024-11-18 01:00:55.135271] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.807 [2024-11-18 01:00:55.135379] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:20.807 [2024-11-18 01:00:55.135512] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.807 [2024-11-18 01:00:55.138734] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.807 [2024-11-18 01:00:55.138938] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:20.807 pt3 00:18:20.807 01:00:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:20.807 01:00:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:20.807 01:00:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:20.807 01:00:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:20.807 01:00:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:20.807 01:00:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:20.807 01:00:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:20.807 01:00:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:20.807 01:00:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:21.066 malloc4 00:18:21.066 01:00:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:18:21.325 [2024-11-18 01:00:55.539201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:21.325 [2024-11-18 01:00:55.539651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.325 [2024-11-18 01:00:55.539735] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:21.325 [2024-11-18 01:00:55.539888] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.325 [2024-11-18 01:00:55.542899] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.325 [2024-11-18 01:00:55.543107] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:21.325 pt4 00:18:21.325 01:00:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:21.325 01:00:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:21.325 01:00:55 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:21.584 [2024-11-18 01:00:55.731619] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:21.584 [2024-11-18 01:00:55.734505] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:21.584 [2024-11-18 01:00:55.734721] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:21.584 [2024-11-18 01:00:55.734802] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:21.584 [2024-11-18 01:00:55.735166] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:18:21.584 [2024-11-18 01:00:55.735262] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:21.584 [2024-11-18 01:00:55.735467] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:18:21.584 [2024-11-18 01:00:55.736068] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:18:21.584 [2024-11-18 01:00:55.736179] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:18:21.584 [2024-11-18 01:00:55.736492] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.584 01:00:55 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:21.584 01:00:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:21.584 01:00:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:21.584 01:00:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:21.584 01:00:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:21.584 01:00:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:21.584 01:00:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:21.584 01:00:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:21.584 01:00:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:21.584 01:00:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:21.584 01:00:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.584 01:00:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.842 01:00:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:21.842 "name": "raid_bdev1", 00:18:21.842 "uuid": 
"91ccc718-d36f-4b15-a3f9-14dda26dd459", 00:18:21.842 "strip_size_kb": 64, 00:18:21.842 "state": "online", 00:18:21.842 "raid_level": "raid0", 00:18:21.842 "superblock": true, 00:18:21.842 "num_base_bdevs": 4, 00:18:21.842 "num_base_bdevs_discovered": 4, 00:18:21.842 "num_base_bdevs_operational": 4, 00:18:21.842 "base_bdevs_list": [ 00:18:21.842 { 00:18:21.842 "name": "pt1", 00:18:21.842 "uuid": "5384e9ad-bfb2-5b76-a833-775e4cf2de45", 00:18:21.842 "is_configured": true, 00:18:21.842 "data_offset": 2048, 00:18:21.842 "data_size": 63488 00:18:21.842 }, 00:18:21.842 { 00:18:21.842 "name": "pt2", 00:18:21.842 "uuid": "a01827d2-ed04-5407-a529-4698487b28cb", 00:18:21.842 "is_configured": true, 00:18:21.842 "data_offset": 2048, 00:18:21.842 "data_size": 63488 00:18:21.842 }, 00:18:21.842 { 00:18:21.842 "name": "pt3", 00:18:21.842 "uuid": "6aa6d9ea-30fd-5214-b2ae-1840b96cbb0b", 00:18:21.842 "is_configured": true, 00:18:21.842 "data_offset": 2048, 00:18:21.842 "data_size": 63488 00:18:21.842 }, 00:18:21.842 { 00:18:21.842 "name": "pt4", 00:18:21.842 "uuid": "a9e3e638-7cf9-510f-a788-4377ce4b6026", 00:18:21.842 "is_configured": true, 00:18:21.842 "data_offset": 2048, 00:18:21.842 "data_size": 63488 00:18:21.842 } 00:18:21.842 ] 00:18:21.842 }' 00:18:21.842 01:00:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:21.842 01:00:56 -- common/autotest_common.sh@10 -- # set +x 00:18:22.410 01:00:56 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:22.410 01:00:56 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:22.668 [2024-11-18 01:00:56.840966] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.668 01:00:56 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=91ccc718-d36f-4b15-a3f9-14dda26dd459 00:18:22.668 01:00:56 -- bdev/bdev_raid.sh@380 -- # '[' -z 91ccc718-d36f-4b15-a3f9-14dda26dd459 ']' 00:18:22.668 01:00:56 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:22.668 [2024-11-18 01:00:57.040711] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:22.669 [2024-11-18 01:00:57.041016] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:22.669 [2024-11-18 01:00:57.041275] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.669 [2024-11-18 01:00:57.041460] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.669 [2024-11-18 01:00:57.041537] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:18:22.669 01:00:57 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.669 01:00:57 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:22.927 01:00:57 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:22.927 01:00:57 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:22.927 01:00:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:22.927 01:00:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:23.186 01:00:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.186 01:00:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:18:23.444 01:00:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.444 01:00:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:23.702 01:00:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.702 01:00:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:23.702 01:00:58 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:23.702 01:00:58 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:23.960 01:00:58 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:23.960 01:00:58 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:23.960 01:00:58 -- common/autotest_common.sh@650 -- # local es=0 00:18:23.960 01:00:58 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:23.960 01:00:58 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:23.960 01:00:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.960 01:00:58 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:23.960 01:00:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.960 01:00:58 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:23.960 01:00:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.960 01:00:58 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:23.960 01:00:58 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:23.960 01:00:58 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:24.219 [2024-11-18 01:00:58.488981] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:24.219 [2024-11-18 01:00:58.491865] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:24.219 [2024-11-18 01:00:58.492070] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:24.219 [2024-11-18 01:00:58.492139] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:24.219 [2024-11-18 01:00:58.492276] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:24.219 [2024-11-18 01:00:58.492408] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:24.219 [2024-11-18 01:00:58.492639] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:24.219 [2024-11-18 01:00:58.492727] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:24.219 [2024-11-18 01:00:58.492800] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.219 [2024-11-18 01:00:58.492833] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:18:24.219 request: 00:18:24.219 { 00:18:24.219 "name": "raid_bdev1", 00:18:24.219 "raid_level": "raid0", 00:18:24.219 "base_bdevs": [ 00:18:24.219 "malloc1", 00:18:24.219 "malloc2", 00:18:24.219 "malloc3", 00:18:24.219 "malloc4" 00:18:24.219 ], 00:18:24.219 "superblock": false, 00:18:24.219 "strip_size_kb": 64, 00:18:24.219 "method": "bdev_raid_create", 00:18:24.219 "req_id": 1 00:18:24.219 } 00:18:24.219 Got JSON-RPC error response 00:18:24.219 response: 00:18:24.219 { 00:18:24.219 "code": -17, 00:18:24.219 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:24.219 } 00:18:24.219 01:00:58 -- common/autotest_common.sh@653 -- # es=1 00:18:24.219 01:00:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:24.219 01:00:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:24.219 01:00:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:24.219 01:00:58 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.219 01:00:58 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:24.478 01:00:58 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:24.478 01:00:58 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:24.478 01:00:58 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:24.737 [2024-11-18 01:00:58.889188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:24.737 [2024-11-18 01:00:58.889540] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.737 [2024-11-18 01:00:58.889616] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:24.737 [2024-11-18 01:00:58.889716] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.737 [2024-11-18 01:00:58.892956] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.737 [2024-11-18 01:00:58.893144] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:24.737 [2024-11-18 01:00:58.893347] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:24.737 [2024-11-18 01:00:58.893498] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:24.737 pt1 00:18:24.737 01:00:58 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:24.737 01:00:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:24.737 01:00:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:24.737 01:00:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:24.737 01:00:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:24.737 01:00:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:24.737 01:00:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:24.737 01:00:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:24.737 01:00:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:24.737 01:00:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:24.737 01:00:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.737 01:00:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.994 01:00:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:24.994 "name": "raid_bdev1", 00:18:24.994 "uuid": "91ccc718-d36f-4b15-a3f9-14dda26dd459", 00:18:24.994 "strip_size_kb": 64, 00:18:24.994 "state": "configuring", 00:18:24.995 "raid_level": "raid0", 00:18:24.995 "superblock": true, 00:18:24.995 "num_base_bdevs": 4, 00:18:24.995 "num_base_bdevs_discovered": 1, 00:18:24.995 "num_base_bdevs_operational": 4, 00:18:24.995 "base_bdevs_list": [ 00:18:24.995 { 00:18:24.995 "name": "pt1", 00:18:24.995 "uuid": "5384e9ad-bfb2-5b76-a833-775e4cf2de45", 00:18:24.995 "is_configured": true, 00:18:24.995 "data_offset": 2048, 00:18:24.995 "data_size": 63488 00:18:24.995 }, 00:18:24.995 { 00:18:24.995 "name": null, 00:18:24.995 "uuid": "a01827d2-ed04-5407-a529-4698487b28cb", 00:18:24.995 "is_configured": false, 00:18:24.995 "data_offset": 2048, 00:18:24.995 "data_size": 63488 00:18:24.995 }, 00:18:24.995 { 00:18:24.995 "name": null, 00:18:24.995 "uuid": "6aa6d9ea-30fd-5214-b2ae-1840b96cbb0b", 00:18:24.995 "is_configured": false, 00:18:24.995 "data_offset": 2048, 00:18:24.995 "data_size": 63488 00:18:24.995 }, 00:18:24.995 { 00:18:24.995 "name": null, 00:18:24.995 "uuid": "a9e3e638-7cf9-510f-a788-4377ce4b6026", 00:18:24.995 "is_configured": false, 00:18:24.995 "data_offset": 2048, 00:18:24.995 "data_size": 63488 00:18:24.995 } 00:18:24.995 ] 00:18:24.995 }' 00:18:24.995 01:00:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:24.995 01:00:59 -- common/autotest_common.sh@10 -- # set +x 00:18:25.561 01:00:59 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:25.561 01:00:59 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:25.820 [2024-11-18 01:00:59.974797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:25.820 [2024-11-18 01:00:59.975203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.820 [2024-11-18 01:00:59.975299] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:25.820 [2024-11-18 01:00:59.975419] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.820 [2024-11-18 01:00:59.976013] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.820 [2024-11-18 01:00:59.976102] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:25.820 [2024-11-18 01:00:59.976246] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:25.820 [2024-11-18 01:00:59.976389] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:25.820 pt2 00:18:25.820 01:01:00 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:25.820 [2024-11-18 01:01:00.182866] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:25.820 01:01:00 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:25.820 01:01:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:25.820 01:01:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:25.820 01:01:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:25.820 01:01:00 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:25.820 01:01:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:25.820 01:01:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:25.820 01:01:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:25.820 01:01:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:25.820 01:01:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:25.820 01:01:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.820 01:01:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.080 01:01:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:26.080 "name": "raid_bdev1", 00:18:26.080 "uuid": "91ccc718-d36f-4b15-a3f9-14dda26dd459", 00:18:26.080 "strip_size_kb": 64, 00:18:26.080 "state": "configuring", 00:18:26.080 "raid_level": "raid0", 00:18:26.080 "superblock": true, 00:18:26.080 "num_base_bdevs": 4, 00:18:26.080 "num_base_bdevs_discovered": 1, 00:18:26.080 "num_base_bdevs_operational": 4, 00:18:26.080 "base_bdevs_list": [ 00:18:26.080 { 00:18:26.080 "name": "pt1", 00:18:26.080 "uuid": "5384e9ad-bfb2-5b76-a833-775e4cf2de45", 00:18:26.080 "is_configured": true, 00:18:26.080 "data_offset": 2048, 00:18:26.080 "data_size": 63488 00:18:26.080 }, 00:18:26.080 { 00:18:26.080 "name": null, 00:18:26.080 "uuid": "a01827d2-ed04-5407-a529-4698487b28cb", 00:18:26.080 "is_configured": false, 00:18:26.080 "data_offset": 2048, 00:18:26.080 "data_size": 63488 00:18:26.080 }, 00:18:26.080 { 00:18:26.080 "name": null, 00:18:26.080 "uuid": "6aa6d9ea-30fd-5214-b2ae-1840b96cbb0b", 00:18:26.080 "is_configured": false, 00:18:26.080 "data_offset": 2048, 00:18:26.080 "data_size": 63488 00:18:26.080 }, 00:18:26.080 { 00:18:26.080 "name": null, 00:18:26.080 "uuid": "a9e3e638-7cf9-510f-a788-4377ce4b6026", 00:18:26.080 "is_configured": false, 00:18:26.080 "data_offset": 2048, 00:18:26.080 "data_size": 63488 00:18:26.080 } 00:18:26.080 ] 00:18:26.080 }' 00:18:26.080 01:01:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:26.080 01:01:00 -- common/autotest_common.sh@10 -- # set +x 00:18:26.647 01:01:00 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:26.647 01:01:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:26.647 01:01:00 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:26.907 [2024-11-18 01:01:01.199019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:26.907 [2024-11-18 01:01:01.199367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.907 [2024-11-18 01:01:01.199452] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:26.907 [2024-11-18 01:01:01.199510] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.907 [2024-11-18 01:01:01.200075] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.907 [2024-11-18 01:01:01.200243] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:26.907 [2024-11-18 01:01:01.200383] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:26.907 [2024-11-18 01:01:01.200437] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:26.907 pt2 00:18:26.907 01:01:01 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:26.907 01:01:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:26.907 01:01:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:27.166 [2024-11-18 01:01:01.403108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:27.166 [2024-11-18 01:01:01.403535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.166 [2024-11-18 01:01:01.403611] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:27.166 [2024-11-18 01:01:01.403723] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.166 [2024-11-18 01:01:01.404367] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.166 [2024-11-18 01:01:01.404521] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:27.166 [2024-11-18 01:01:01.404654] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:27.166 [2024-11-18 01:01:01.404703] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:27.166 pt3 00:18:27.166 01:01:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:27.166 01:01:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:27.166 01:01:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:27.425 [2024-11-18 01:01:01.683134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:27.425 [2024-11-18 01:01:01.683520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.425 [2024-11-18 01:01:01.683594] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:27.425 [2024-11-18 01:01:01.683707] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.425 [2024-11-18 01:01:01.684233] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.425 [2024-11-18 01:01:01.684424] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:27.425 [2024-11-18 01:01:01.684601] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:27.425 [2024-11-18 01:01:01.684697] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:27.425 [2024-11-18 01:01:01.684878] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:18:27.425 [2024-11-18 01:01:01.685043] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:27.425 [2024-11-18 01:01:01.685165] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:18:27.425 [2024-11-18 01:01:01.685602] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:18:27.425 [2024-11-18 01:01:01.685709] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:18:27.425 [2024-11-18 01:01:01.685884] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.425 pt4 00:18:27.425 01:01:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:27.425 01:01:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:18:27.425 01:01:01 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:27.425 01:01:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:27.425 01:01:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:27.425 01:01:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:27.425 01:01:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:27.425 01:01:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:27.425 01:01:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:27.425 01:01:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.425 01:01:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.425 01:01:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.425 01:01:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.425 01:01:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.684 01:01:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.684 "name": "raid_bdev1", 00:18:27.684 "uuid": "91ccc718-d36f-4b15-a3f9-14dda26dd459", 00:18:27.684 "strip_size_kb": 64, 00:18:27.684 "state": "online", 00:18:27.684 "raid_level": "raid0", 00:18:27.684 "superblock": true, 00:18:27.684 "num_base_bdevs": 4, 00:18:27.684 "num_base_bdevs_discovered": 4, 00:18:27.684 "num_base_bdevs_operational": 4, 00:18:27.684 "base_bdevs_list": [ 00:18:27.684 { 00:18:27.684 "name": "pt1", 00:18:27.684 "uuid": "5384e9ad-bfb2-5b76-a833-775e4cf2de45", 00:18:27.684 "is_configured": true, 00:18:27.684 "data_offset": 2048, 00:18:27.684 "data_size": 63488 00:18:27.684 }, 00:18:27.684 { 00:18:27.684 "name": "pt2", 00:18:27.684 "uuid": "a01827d2-ed04-5407-a529-4698487b28cb", 00:18:27.684 "is_configured": true, 00:18:27.684 "data_offset": 2048, 00:18:27.684 "data_size": 63488 00:18:27.684 }, 00:18:27.684 { 00:18:27.684 "name": "pt3", 00:18:27.684 "uuid": "6aa6d9ea-30fd-5214-b2ae-1840b96cbb0b", 00:18:27.684 "is_configured": true, 00:18:27.684 "data_offset": 2048, 00:18:27.684 "data_size": 63488 00:18:27.684 }, 00:18:27.684 { 00:18:27.684 "name": "pt4", 00:18:27.684 "uuid": "a9e3e638-7cf9-510f-a788-4377ce4b6026", 00:18:27.684 "is_configured": true, 00:18:27.684 "data_offset": 2048, 00:18:27.684 "data_size": 63488 00:18:27.684 } 00:18:27.684 ] 00:18:27.684 }' 00:18:27.684 01:01:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.684 01:01:01 -- common/autotest_common.sh@10 -- # set +x 00:18:28.252 01:01:02 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:28.252 01:01:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:28.511 [2024-11-18 01:01:02.755579] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.511 01:01:02 -- bdev/bdev_raid.sh@430 -- # '[' 91ccc718-d36f-4b15-a3f9-14dda26dd459 '!=' 91ccc718-d36f-4b15-a3f9-14dda26dd459 ']' 00:18:28.511 01:01:02 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:18:28.511 01:01:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:28.511 01:01:02 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:28.511 01:01:02 -- bdev/bdev_raid.sh@511 -- # killprocess 130038 00:18:28.511 01:01:02 -- common/autotest_common.sh@936 -- # '[' -z 130038 ']' 00:18:28.511 01:01:02 -- common/autotest_common.sh@940 -- # kill -0 130038 00:18:28.511 01:01:02 -- common/autotest_common.sh@941 -- # uname 00:18:28.511 01:01:02 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:28.511 01:01:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130038 00:18:28.511 01:01:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:28.511 killing process with pid 130038 00:18:28.511 01:01:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:28.511 01:01:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130038' 00:18:28.511 01:01:02 -- common/autotest_common.sh@955 -- # kill 130038 00:18:28.511 01:01:02 -- common/autotest_common.sh@960 -- # wait 130038 00:18:28.511 [2024-11-18 01:01:02.814899] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.511 [2024-11-18 01:01:02.814996] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.511 [2024-11-18 01:01:02.815071] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.511 [2024-11-18 01:01:02.815080] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:18:28.511 [2024-11-18 01:01:02.897996] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:29.079 ************************************ 00:18:29.079 END TEST raid_superblock_test 00:18:29.079 ************************************ 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:29.079 00:18:29.079 real 0m10.463s 00:18:29.079 user 0m18.064s 00:18:29.079 sys 0m1.991s 00:18:29.079 01:01:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:29.079 01:01:03 -- common/autotest_common.sh@10 -- # set +x 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:29.079 01:01:03 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:29.079 01:01:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:29.079 01:01:03 -- common/autotest_common.sh@10 -- # set +x 00:18:29.079 ************************************ 00:18:29.079 START TEST raid_state_function_test 00:18:29.079 ************************************ 00:18:29.079 01:01:03 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 false 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:29.079 
01:01:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@226 -- # raid_pid=130354 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130354' 00:18:29.079 Process raid pid: 130354 00:18:29.079 01:01:03 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130354 /var/tmp/spdk-raid.sock 00:18:29.079 01:01:03 -- common/autotest_common.sh@829 -- # '[' -z 130354 ']' 00:18:29.079 01:01:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:29.079 01:01:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.079 01:01:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:29.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:29.079 01:01:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.079 01:01:03 -- common/autotest_common.sh@10 -- # set +x 00:18:29.079 [2024-11-18 01:01:03.435290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:29.079 [2024-11-18 01:01:03.435643] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.338 [2024-11-18 01:01:03.587479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.338 [2024-11-18 01:01:03.674305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.596 [2024-11-18 01:01:03.757036] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.164 01:01:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.164 01:01:04 -- common/autotest_common.sh@862 -- # return 0 00:18:30.164 01:01:04 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:30.422 [2024-11-18 01:01:04.577948] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:30.422 [2024-11-18 01:01:04.578327] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:30.422 [2024-11-18 01:01:04.578424] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:30.422 [2024-11-18 01:01:04.578477] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:30.422 [2024-11-18 01:01:04.578504] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:30.422 [2024-11-18 01:01:04.578575] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:30.422 [2024-11-18 01:01:04.578661] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:30.422 [2024-11-18 01:01:04.578718] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:30.422 01:01:04 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:30.422 01:01:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:30.422 01:01:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:30.422 01:01:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:30.422 01:01:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:30.422 01:01:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:30.422 01:01:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.422 01:01:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.422 01:01:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.422 01:01:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.422 01:01:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.422 01:01:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.682 01:01:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.682 "name": "Existed_Raid", 00:18:30.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.682 "strip_size_kb": 64, 00:18:30.682 "state": "configuring", 00:18:30.682 "raid_level": "concat", 00:18:30.682 "superblock": false, 00:18:30.682 "num_base_bdevs": 4, 00:18:30.682 "num_base_bdevs_discovered": 0, 00:18:30.682 "num_base_bdevs_operational": 4, 00:18:30.682 "base_bdevs_list": [ 00:18:30.682 { 00:18:30.682 
"name": "BaseBdev1", 00:18:30.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.682 "is_configured": false, 00:18:30.682 "data_offset": 0, 00:18:30.682 "data_size": 0 00:18:30.682 }, 00:18:30.682 { 00:18:30.682 "name": "BaseBdev2", 00:18:30.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.682 "is_configured": false, 00:18:30.682 "data_offset": 0, 00:18:30.682 "data_size": 0 00:18:30.682 }, 00:18:30.682 { 00:18:30.682 "name": "BaseBdev3", 00:18:30.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.682 "is_configured": false, 00:18:30.682 "data_offset": 0, 00:18:30.682 "data_size": 0 00:18:30.682 }, 00:18:30.682 { 00:18:30.682 "name": "BaseBdev4", 00:18:30.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.682 "is_configured": false, 00:18:30.682 "data_offset": 0, 00:18:30.682 "data_size": 0 00:18:30.682 } 00:18:30.682 ] 00:18:30.682 }' 00:18:30.682 01:01:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.682 01:01:04 -- common/autotest_common.sh@10 -- # set +x 00:18:30.940 01:01:05 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:31.199 [2024-11-18 01:01:05.573989] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:31.199 [2024-11-18 01:01:05.574346] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:31.199 01:01:05 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:31.458 [2024-11-18 01:01:05.846092] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:31.458 [2024-11-18 01:01:05.846341] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:31.458 [2024-11-18 01:01:05.846422] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:31.458 [2024-11-18 01:01:05.846480] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:31.458 [2024-11-18 01:01:05.846543] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:31.458 [2024-11-18 01:01:05.846592] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:31.458 [2024-11-18 01:01:05.846618] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:31.458 [2024-11-18 01:01:05.846664] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:31.717 01:01:05 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:31.975 [2024-11-18 01:01:06.122277] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.975 BaseBdev1 00:18:31.976 01:01:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:31.976 01:01:06 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:31.976 01:01:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:31.976 01:01:06 -- common/autotest_common.sh@899 -- # local i 00:18:31.976 01:01:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:31.976 01:01:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:31.976 01:01:06 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:31.976 01:01:06 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:32.235 [ 00:18:32.235 { 00:18:32.235 "name": "BaseBdev1", 00:18:32.235 "aliases": [ 00:18:32.235 "74836b85-d814-4371-a1a4-0cf21f0d2333" 00:18:32.235 ], 00:18:32.235 "product_name": "Malloc disk", 00:18:32.235 "block_size": 512, 00:18:32.235 "num_blocks": 65536, 00:18:32.235 "uuid": "74836b85-d814-4371-a1a4-0cf21f0d2333", 00:18:32.235 "assigned_rate_limits": { 00:18:32.235 "rw_ios_per_sec": 0, 00:18:32.235 "rw_mbytes_per_sec": 0, 00:18:32.235 "r_mbytes_per_sec": 0, 00:18:32.235 "w_mbytes_per_sec": 0 00:18:32.235 }, 00:18:32.235 "claimed": true, 00:18:32.235 "claim_type": "exclusive_write", 00:18:32.235 "zoned": false, 00:18:32.235 "supported_io_types": { 00:18:32.235 "read": true, 00:18:32.235 "write": true, 00:18:32.235 "unmap": true, 00:18:32.235 "write_zeroes": true, 00:18:32.235 "flush": true, 00:18:32.235 "reset": true, 00:18:32.235 "compare": false, 00:18:32.235 "compare_and_write": false, 00:18:32.235 "abort": true, 00:18:32.235 "nvme_admin": false, 00:18:32.235 "nvme_io": false 00:18:32.235 }, 00:18:32.235 "memory_domains": [ 00:18:32.235 { 00:18:32.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.235 "dma_device_type": 2 00:18:32.235 } 00:18:32.235 ], 00:18:32.235 "driver_specific": {} 00:18:32.235 } 00:18:32.235 ] 00:18:32.235 01:01:06 -- common/autotest_common.sh@905 -- # return 0 00:18:32.235 01:01:06 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:32.235 01:01:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:32.235 01:01:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:32.235 01:01:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:32.235 01:01:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:32.235 01:01:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:32.235 01:01:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:32.235 01:01:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:32.235 01:01:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:32.235 01:01:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:32.235 01:01:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.235 01:01:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.493 01:01:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:32.493 "name": "Existed_Raid", 00:18:32.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.493 "strip_size_kb": 64, 00:18:32.493 "state": "configuring", 00:18:32.493 "raid_level": "concat", 00:18:32.493 "superblock": false, 00:18:32.493 "num_base_bdevs": 4, 00:18:32.493 "num_base_bdevs_discovered": 1, 00:18:32.493 "num_base_bdevs_operational": 4, 00:18:32.493 "base_bdevs_list": [ 00:18:32.493 { 00:18:32.493 "name": "BaseBdev1", 00:18:32.494 "uuid": "74836b85-d814-4371-a1a4-0cf21f0d2333", 00:18:32.494 "is_configured": true, 00:18:32.494 "data_offset": 0, 00:18:32.494 "data_size": 65536 00:18:32.494 }, 00:18:32.494 { 00:18:32.494 "name": "BaseBdev2", 00:18:32.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.494 "is_configured": false, 00:18:32.494 "data_offset": 0, 00:18:32.494 "data_size": 0 00:18:32.494 }, 
00:18:32.494 { 00:18:32.494 "name": "BaseBdev3", 00:18:32.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.494 "is_configured": false, 00:18:32.494 "data_offset": 0, 00:18:32.494 "data_size": 0 00:18:32.494 }, 00:18:32.494 { 00:18:32.494 "name": "BaseBdev4", 00:18:32.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.494 "is_configured": false, 00:18:32.494 "data_offset": 0, 00:18:32.494 "data_size": 0 00:18:32.494 } 00:18:32.494 ] 00:18:32.494 }' 00:18:32.494 01:01:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:32.494 01:01:06 -- common/autotest_common.sh@10 -- # set +x 00:18:33.061 01:01:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:33.062 [2024-11-18 01:01:07.442918] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:33.062 [2024-11-18 01:01:07.443260] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:33.320 01:01:07 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:33.320 01:01:07 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:33.320 [2024-11-18 01:01:07.719143] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.320 [2024-11-18 01:01:07.721820] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:33.320 [2024-11-18 01:01:07.722062] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:33.320 [2024-11-18 01:01:07.722182] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:33.320 [2024-11-18 01:01:07.722287] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:33.580 [2024-11-18 01:01:07.722356] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:33.580 [2024-11-18 01:01:07.722423] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:33.580 "name": "Existed_Raid", 00:18:33.580 
"uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.580 "strip_size_kb": 64, 00:18:33.580 "state": "configuring", 00:18:33.580 "raid_level": "concat", 00:18:33.580 "superblock": false, 00:18:33.580 "num_base_bdevs": 4, 00:18:33.580 "num_base_bdevs_discovered": 1, 00:18:33.580 "num_base_bdevs_operational": 4, 00:18:33.580 "base_bdevs_list": [ 00:18:33.580 { 00:18:33.580 "name": "BaseBdev1", 00:18:33.580 "uuid": "74836b85-d814-4371-a1a4-0cf21f0d2333", 00:18:33.580 "is_configured": true, 00:18:33.580 "data_offset": 0, 00:18:33.580 "data_size": 65536 00:18:33.580 }, 00:18:33.580 { 00:18:33.580 "name": "BaseBdev2", 00:18:33.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.580 "is_configured": false, 00:18:33.580 "data_offset": 0, 00:18:33.580 "data_size": 0 00:18:33.580 }, 00:18:33.580 { 00:18:33.580 "name": "BaseBdev3", 00:18:33.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.580 "is_configured": false, 00:18:33.580 "data_offset": 0, 00:18:33.580 "data_size": 0 00:18:33.580 }, 00:18:33.580 { 00:18:33.580 "name": "BaseBdev4", 00:18:33.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.580 "is_configured": false, 00:18:33.580 "data_offset": 0, 00:18:33.580 "data_size": 0 00:18:33.580 } 00:18:33.580 ] 00:18:33.580 }' 00:18:33.580 01:01:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:33.580 01:01:07 -- common/autotest_common.sh@10 -- # set +x 00:18:34.216 01:01:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:34.491 [2024-11-18 01:01:08.840245] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.491 BaseBdev2 00:18:34.491 01:01:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:34.491 01:01:08 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:34.491 01:01:08 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:34.491 01:01:08 -- common/autotest_common.sh@899 -- # local i 00:18:34.491 01:01:08 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:34.491 01:01:08 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:34.491 01:01:08 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:34.765 01:01:09 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:35.023 [ 00:18:35.023 { 00:18:35.023 "name": "BaseBdev2", 00:18:35.023 "aliases": [ 00:18:35.023 "a912938c-3072-4a88-bd62-df8b9e818875" 00:18:35.023 ], 00:18:35.023 "product_name": "Malloc disk", 00:18:35.023 "block_size": 512, 00:18:35.023 "num_blocks": 65536, 00:18:35.023 "uuid": "a912938c-3072-4a88-bd62-df8b9e818875", 00:18:35.023 "assigned_rate_limits": { 00:18:35.024 "rw_ios_per_sec": 0, 00:18:35.024 "rw_mbytes_per_sec": 0, 00:18:35.024 "r_mbytes_per_sec": 0, 00:18:35.024 "w_mbytes_per_sec": 0 00:18:35.024 }, 00:18:35.024 "claimed": true, 00:18:35.024 "claim_type": "exclusive_write", 00:18:35.024 "zoned": false, 00:18:35.024 "supported_io_types": { 00:18:35.024 "read": true, 00:18:35.024 "write": true, 00:18:35.024 "unmap": true, 00:18:35.024 "write_zeroes": true, 00:18:35.024 "flush": true, 00:18:35.024 "reset": true, 00:18:35.024 "compare": false, 00:18:35.024 "compare_and_write": false, 00:18:35.024 "abort": true, 00:18:35.024 "nvme_admin": false, 00:18:35.024 "nvme_io": false 00:18:35.024 }, 00:18:35.024 "memory_domains": [ 
00:18:35.024 { 00:18:35.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.024 "dma_device_type": 2 00:18:35.024 } 00:18:35.024 ], 00:18:35.024 "driver_specific": {} 00:18:35.024 } 00:18:35.024 ] 00:18:35.024 01:01:09 -- common/autotest_common.sh@905 -- # return 0 00:18:35.024 01:01:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:35.024 01:01:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:35.024 01:01:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:35.024 01:01:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:35.024 01:01:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:35.024 01:01:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:35.024 01:01:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:35.024 01:01:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:35.024 01:01:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.024 01:01:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:35.024 01:01:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:35.024 01:01:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.024 01:01:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.024 01:01:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.281 01:01:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.281 "name": "Existed_Raid", 00:18:35.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.281 "strip_size_kb": 64, 00:18:35.281 "state": "configuring", 00:18:35.281 "raid_level": "concat", 00:18:35.281 "superblock": false, 00:18:35.281 "num_base_bdevs": 4, 00:18:35.281 "num_base_bdevs_discovered": 2, 00:18:35.281 "num_base_bdevs_operational": 4, 00:18:35.281 "base_bdevs_list": [ 00:18:35.281 { 00:18:35.281 "name": "BaseBdev1", 00:18:35.281 "uuid": "74836b85-d814-4371-a1a4-0cf21f0d2333", 00:18:35.281 "is_configured": true, 00:18:35.281 "data_offset": 0, 00:18:35.281 "data_size": 65536 00:18:35.281 }, 00:18:35.281 { 00:18:35.281 "name": "BaseBdev2", 00:18:35.281 "uuid": "a912938c-3072-4a88-bd62-df8b9e818875", 00:18:35.281 "is_configured": true, 00:18:35.281 "data_offset": 0, 00:18:35.281 "data_size": 65536 00:18:35.281 }, 00:18:35.281 { 00:18:35.281 "name": "BaseBdev3", 00:18:35.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.281 "is_configured": false, 00:18:35.281 "data_offset": 0, 00:18:35.281 "data_size": 0 00:18:35.281 }, 00:18:35.281 { 00:18:35.281 "name": "BaseBdev4", 00:18:35.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.281 "is_configured": false, 00:18:35.281 "data_offset": 0, 00:18:35.281 "data_size": 0 00:18:35.281 } 00:18:35.281 ] 00:18:35.281 }' 00:18:35.281 01:01:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.281 01:01:09 -- common/autotest_common.sh@10 -- # set +x 00:18:35.844 01:01:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:36.103 [2024-11-18 01:01:10.342065] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:36.103 BaseBdev3 00:18:36.103 01:01:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:36.103 01:01:10 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:36.103 01:01:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:36.103 
01:01:10 -- common/autotest_common.sh@899 -- # local i 00:18:36.103 01:01:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:36.103 01:01:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:36.103 01:01:10 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:36.361 01:01:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:36.620 [ 00:18:36.620 { 00:18:36.620 "name": "BaseBdev3", 00:18:36.620 "aliases": [ 00:18:36.620 "f45a2845-fef0-4e00-950f-67de2b7ca322" 00:18:36.620 ], 00:18:36.620 "product_name": "Malloc disk", 00:18:36.620 "block_size": 512, 00:18:36.620 "num_blocks": 65536, 00:18:36.620 "uuid": "f45a2845-fef0-4e00-950f-67de2b7ca322", 00:18:36.620 "assigned_rate_limits": { 00:18:36.620 "rw_ios_per_sec": 0, 00:18:36.620 "rw_mbytes_per_sec": 0, 00:18:36.620 "r_mbytes_per_sec": 0, 00:18:36.620 "w_mbytes_per_sec": 0 00:18:36.620 }, 00:18:36.620 "claimed": true, 00:18:36.620 "claim_type": "exclusive_write", 00:18:36.620 "zoned": false, 00:18:36.620 "supported_io_types": { 00:18:36.620 "read": true, 00:18:36.620 "write": true, 00:18:36.620 "unmap": true, 00:18:36.620 "write_zeroes": true, 00:18:36.620 "flush": true, 00:18:36.620 "reset": true, 00:18:36.620 "compare": false, 00:18:36.620 "compare_and_write": false, 00:18:36.620 "abort": true, 00:18:36.620 "nvme_admin": false, 00:18:36.620 "nvme_io": false 00:18:36.620 }, 00:18:36.620 "memory_domains": [ 00:18:36.620 { 00:18:36.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.620 "dma_device_type": 2 00:18:36.620 } 00:18:36.620 ], 00:18:36.620 "driver_specific": {} 00:18:36.620 } 00:18:36.620 ] 00:18:36.620 01:01:10 -- common/autotest_common.sh@905 -- # return 0 00:18:36.620 01:01:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:36.620 01:01:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:36.620 01:01:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:36.620 01:01:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:36.620 01:01:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:36.620 01:01:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:36.620 01:01:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:36.620 01:01:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:36.620 01:01:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:36.620 01:01:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:36.620 01:01:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:36.620 01:01:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:36.620 01:01:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.620 01:01:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.879 01:01:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:36.879 "name": "Existed_Raid", 00:18:36.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.879 "strip_size_kb": 64, 00:18:36.879 "state": "configuring", 00:18:36.879 "raid_level": "concat", 00:18:36.879 "superblock": false, 00:18:36.879 "num_base_bdevs": 4, 00:18:36.879 "num_base_bdevs_discovered": 3, 00:18:36.879 "num_base_bdevs_operational": 4, 00:18:36.879 "base_bdevs_list": [ 00:18:36.879 { 00:18:36.879 "name": 
"BaseBdev1", 00:18:36.879 "uuid": "74836b85-d814-4371-a1a4-0cf21f0d2333", 00:18:36.879 "is_configured": true, 00:18:36.879 "data_offset": 0, 00:18:36.879 "data_size": 65536 00:18:36.879 }, 00:18:36.879 { 00:18:36.879 "name": "BaseBdev2", 00:18:36.879 "uuid": "a912938c-3072-4a88-bd62-df8b9e818875", 00:18:36.879 "is_configured": true, 00:18:36.879 "data_offset": 0, 00:18:36.879 "data_size": 65536 00:18:36.879 }, 00:18:36.879 { 00:18:36.879 "name": "BaseBdev3", 00:18:36.879 "uuid": "f45a2845-fef0-4e00-950f-67de2b7ca322", 00:18:36.879 "is_configured": true, 00:18:36.879 "data_offset": 0, 00:18:36.879 "data_size": 65536 00:18:36.879 }, 00:18:36.879 { 00:18:36.879 "name": "BaseBdev4", 00:18:36.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.879 "is_configured": false, 00:18:36.879 "data_offset": 0, 00:18:36.879 "data_size": 0 00:18:36.879 } 00:18:36.879 ] 00:18:36.879 }' 00:18:36.879 01:01:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:36.879 01:01:11 -- common/autotest_common.sh@10 -- # set +x 00:18:37.447 01:01:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:37.705 [2024-11-18 01:01:11.908012] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:37.705 [2024-11-18 01:01:11.908353] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:18:37.705 [2024-11-18 01:01:11.908393] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:37.705 [2024-11-18 01:01:11.908677] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:18:37.705 [2024-11-18 01:01:11.909211] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:18:37.705 [2024-11-18 01:01:11.909323] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:18:37.705 [2024-11-18 01:01:11.909646] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.705 BaseBdev4 00:18:37.705 01:01:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:37.705 01:01:11 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:37.705 01:01:11 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:37.705 01:01:11 -- common/autotest_common.sh@899 -- # local i 00:18:37.705 01:01:11 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:37.705 01:01:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:37.705 01:01:11 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:37.965 01:01:12 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:37.965 [ 00:18:37.965 { 00:18:37.965 "name": "BaseBdev4", 00:18:37.965 "aliases": [ 00:18:37.965 "e0394605-309b-4353-9ac1-c35914669059" 00:18:37.965 ], 00:18:37.965 "product_name": "Malloc disk", 00:18:37.965 "block_size": 512, 00:18:37.965 "num_blocks": 65536, 00:18:37.965 "uuid": "e0394605-309b-4353-9ac1-c35914669059", 00:18:37.965 "assigned_rate_limits": { 00:18:37.965 "rw_ios_per_sec": 0, 00:18:37.965 "rw_mbytes_per_sec": 0, 00:18:37.965 "r_mbytes_per_sec": 0, 00:18:37.965 "w_mbytes_per_sec": 0 00:18:37.965 }, 00:18:37.965 "claimed": true, 00:18:37.965 "claim_type": "exclusive_write", 00:18:37.965 "zoned": false, 00:18:37.965 
"supported_io_types": { 00:18:37.965 "read": true, 00:18:37.965 "write": true, 00:18:37.965 "unmap": true, 00:18:37.965 "write_zeroes": true, 00:18:37.965 "flush": true, 00:18:37.965 "reset": true, 00:18:37.965 "compare": false, 00:18:37.965 "compare_and_write": false, 00:18:37.965 "abort": true, 00:18:37.965 "nvme_admin": false, 00:18:37.965 "nvme_io": false 00:18:37.965 }, 00:18:37.965 "memory_domains": [ 00:18:37.965 { 00:18:37.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.965 "dma_device_type": 2 00:18:37.965 } 00:18:37.965 ], 00:18:37.965 "driver_specific": {} 00:18:37.965 } 00:18:37.965 ] 00:18:38.224 01:01:12 -- common/autotest_common.sh@905 -- # return 0 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:38.224 "name": "Existed_Raid", 00:18:38.224 "uuid": "d8e3844e-2f64-4793-9e1f-00a3e522453d", 00:18:38.224 "strip_size_kb": 64, 00:18:38.224 "state": "online", 00:18:38.224 "raid_level": "concat", 00:18:38.224 "superblock": false, 00:18:38.224 "num_base_bdevs": 4, 00:18:38.224 "num_base_bdevs_discovered": 4, 00:18:38.224 "num_base_bdevs_operational": 4, 00:18:38.224 "base_bdevs_list": [ 00:18:38.224 { 00:18:38.224 "name": "BaseBdev1", 00:18:38.224 "uuid": "74836b85-d814-4371-a1a4-0cf21f0d2333", 00:18:38.224 "is_configured": true, 00:18:38.224 "data_offset": 0, 00:18:38.224 "data_size": 65536 00:18:38.224 }, 00:18:38.224 { 00:18:38.224 "name": "BaseBdev2", 00:18:38.224 "uuid": "a912938c-3072-4a88-bd62-df8b9e818875", 00:18:38.224 "is_configured": true, 00:18:38.224 "data_offset": 0, 00:18:38.224 "data_size": 65536 00:18:38.224 }, 00:18:38.224 { 00:18:38.224 "name": "BaseBdev3", 00:18:38.224 "uuid": "f45a2845-fef0-4e00-950f-67de2b7ca322", 00:18:38.224 "is_configured": true, 00:18:38.224 "data_offset": 0, 00:18:38.224 "data_size": 65536 00:18:38.224 }, 00:18:38.224 { 00:18:38.224 "name": "BaseBdev4", 00:18:38.224 "uuid": "e0394605-309b-4353-9ac1-c35914669059", 00:18:38.224 "is_configured": true, 00:18:38.224 "data_offset": 0, 00:18:38.224 "data_size": 65536 00:18:38.224 } 00:18:38.224 ] 00:18:38.224 }' 00:18:38.224 01:01:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:38.224 01:01:12 -- common/autotest_common.sh@10 -- # set +x 00:18:38.791 01:01:13 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:18:39.050 [2024-11-18 01:01:13.428580] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:39.050 [2024-11-18 01:01:13.428873] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.050 [2024-11-18 01:01:13.429126] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:39.309 "name": "Existed_Raid", 00:18:39.309 "uuid": "d8e3844e-2f64-4793-9e1f-00a3e522453d", 00:18:39.309 "strip_size_kb": 64, 00:18:39.309 "state": "offline", 00:18:39.309 "raid_level": "concat", 00:18:39.309 "superblock": false, 00:18:39.309 "num_base_bdevs": 4, 00:18:39.309 "num_base_bdevs_discovered": 3, 00:18:39.309 "num_base_bdevs_operational": 3, 00:18:39.309 "base_bdevs_list": [ 00:18:39.309 { 00:18:39.309 "name": null, 00:18:39.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.309 "is_configured": false, 00:18:39.309 "data_offset": 0, 00:18:39.309 "data_size": 65536 00:18:39.309 }, 00:18:39.309 { 00:18:39.309 "name": "BaseBdev2", 00:18:39.309 "uuid": "a912938c-3072-4a88-bd62-df8b9e818875", 00:18:39.309 "is_configured": true, 00:18:39.309 "data_offset": 0, 00:18:39.309 "data_size": 65536 00:18:39.309 }, 00:18:39.309 { 00:18:39.309 "name": "BaseBdev3", 00:18:39.309 "uuid": "f45a2845-fef0-4e00-950f-67de2b7ca322", 00:18:39.309 "is_configured": true, 00:18:39.309 "data_offset": 0, 00:18:39.309 "data_size": 65536 00:18:39.309 }, 00:18:39.309 { 00:18:39.309 "name": "BaseBdev4", 00:18:39.309 "uuid": "e0394605-309b-4353-9ac1-c35914669059", 00:18:39.309 "is_configured": true, 00:18:39.309 "data_offset": 0, 00:18:39.309 "data_size": 65536 00:18:39.309 } 00:18:39.309 ] 00:18:39.309 }' 00:18:39.309 01:01:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:39.309 01:01:13 -- common/autotest_common.sh@10 -- # set +x 00:18:40.246 01:01:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:40.247 01:01:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:40.247 01:01:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:18:40.247 01:01:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:40.247 01:01:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:40.247 01:01:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:40.247 01:01:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:40.505 [2024-11-18 01:01:14.742905] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:40.505 01:01:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:40.505 01:01:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:40.505 01:01:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:40.505 01:01:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.764 01:01:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:40.764 01:01:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:40.764 01:01:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:41.023 [2024-11-18 01:01:15.226996] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:41.023 01:01:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:41.023 01:01:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:41.023 01:01:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.023 01:01:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:41.282 01:01:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:41.282 01:01:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:41.282 01:01:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:41.541 [2024-11-18 01:01:15.718648] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:41.541 [2024-11-18 01:01:15.718981] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:18:41.541 01:01:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:41.541 01:01:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:41.541 01:01:15 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.541 01:01:15 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:41.799 01:01:15 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:41.799 01:01:15 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:41.799 01:01:15 -- bdev/bdev_raid.sh@287 -- # killprocess 130354 00:18:41.799 01:01:15 -- common/autotest_common.sh@936 -- # '[' -z 130354 ']' 00:18:41.799 01:01:15 -- common/autotest_common.sh@940 -- # kill -0 130354 00:18:41.799 01:01:15 -- common/autotest_common.sh@941 -- # uname 00:18:41.799 01:01:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:41.799 01:01:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130354 00:18:41.799 killing process with pid 130354 00:18:41.799 01:01:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:41.799 01:01:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:41.799 01:01:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130354' 00:18:41.799 01:01:15 -- common/autotest_common.sh@955 
-- # kill 130354 00:18:41.799 01:01:15 -- common/autotest_common.sh@960 -- # wait 130354 00:18:41.799 [2024-11-18 01:01:15.986452] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:41.799 [2024-11-18 01:01:15.986567] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:42.058 01:01:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:42.058 00:18:42.058 real 0m13.023s 00:18:42.058 user 0m23.093s 00:18:42.058 sys 0m2.323s 00:18:42.058 01:01:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:42.058 01:01:16 -- common/autotest_common.sh@10 -- # set +x 00:18:42.058 ************************************ 00:18:42.058 END TEST raid_state_function_test 00:18:42.058 ************************************ 00:18:42.058 01:01:16 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:18:42.058 01:01:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:42.058 01:01:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:42.058 01:01:16 -- common/autotest_common.sh@10 -- # set +x 00:18:42.058 ************************************ 00:18:42.058 START TEST raid_state_function_test_sb 00:18:42.058 ************************************ 00:18:42.058 01:01:16 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 true 00:18:42.058 01:01:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:42.058 01:01:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:42.058 01:01:16 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:42.058 01:01:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:42.058 01:01:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:42.058 01:01:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:42.058 01:01:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:42.058 01:01:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:42.058 01:01:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=130780 
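The trace above boils down to a short RPC sequence against the bdev_svc app: create four malloc base bdevs, let the concat raid claim them and go online, then delete one base bdev and confirm the array drops to offline (has_redundancy returns 1 for concat, so the expected state becomes offline). A minimal hand-driven sketch of that flow, assuming a bdev_svc instance is already listening on /var/tmp/spdk-raid.sock; the rpc() wrapper is illustrative shorthand only, not part of the test scripts, and the test itself creates the raid before the base bdevs exist, while this sketch creates the bases first:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  # 32 MiB at a 512-byte block size -> num_blocks 65536, matching the bdev_get_bdevs output above
  for i in 1 2 3 4; do rpc bdev_malloc_create 32 512 -b "BaseBdev$i"; done

  # concat with a 64 KiB strip size and no superblock; the raid claims all four bases and goes online
  rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # "online"

  # removing any base bdev from a non-redundant level takes the whole array offline
  rpc bdev_malloc_delete BaseBdev1
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # "offline"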
00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130780' 00:18:42.317 Process raid pid: 130780 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130780 /var/tmp/spdk-raid.sock 00:18:42.317 01:01:16 -- common/autotest_common.sh@829 -- # '[' -z 130780 ']' 00:18:42.317 01:01:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:42.317 01:01:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:42.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:42.317 01:01:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.317 01:01:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:42.317 01:01:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.317 01:01:16 -- common/autotest_common.sh@10 -- # set +x 00:18:42.317 [2024-11-18 01:01:16.519295] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:42.317 [2024-11-18 01:01:16.519495] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.317 [2024-11-18 01:01:16.662682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.576 [2024-11-18 01:01:16.743878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.576 [2024-11-18 01:01:16.822632] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.143 01:01:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.143 01:01:17 -- common/autotest_common.sh@862 -- # return 0 00:18:43.143 01:01:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:43.401 [2024-11-18 01:01:17.652455] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:43.401 [2024-11-18 01:01:17.652562] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:43.401 [2024-11-18 01:01:17.652574] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:43.401 [2024-11-18 01:01:17.652595] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:43.401 [2024-11-18 01:01:17.652602] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:43.401 [2024-11-18 01:01:17.652656] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:43.401 [2024-11-18 01:01:17.652663] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:43.401 [2024-11-18 01:01:17.652692] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:43.401 01:01:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:43.401 01:01:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:43.401 01:01:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:43.401 01:01:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:43.401 01:01:17 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:43.401 01:01:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:43.401 01:01:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.401 01:01:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.401 01:01:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:43.401 01:01:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:43.402 01:01:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.402 01:01:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.660 01:01:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.660 "name": "Existed_Raid", 00:18:43.660 "uuid": "54644f78-6c5f-4c66-b453-d1a1999401b0", 00:18:43.660 "strip_size_kb": 64, 00:18:43.660 "state": "configuring", 00:18:43.660 "raid_level": "concat", 00:18:43.660 "superblock": true, 00:18:43.660 "num_base_bdevs": 4, 00:18:43.660 "num_base_bdevs_discovered": 0, 00:18:43.660 "num_base_bdevs_operational": 4, 00:18:43.660 "base_bdevs_list": [ 00:18:43.660 { 00:18:43.660 "name": "BaseBdev1", 00:18:43.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.660 "is_configured": false, 00:18:43.660 "data_offset": 0, 00:18:43.660 "data_size": 0 00:18:43.660 }, 00:18:43.660 { 00:18:43.660 "name": "BaseBdev2", 00:18:43.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.660 "is_configured": false, 00:18:43.660 "data_offset": 0, 00:18:43.660 "data_size": 0 00:18:43.660 }, 00:18:43.660 { 00:18:43.660 "name": "BaseBdev3", 00:18:43.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.660 "is_configured": false, 00:18:43.660 "data_offset": 0, 00:18:43.660 "data_size": 0 00:18:43.660 }, 00:18:43.660 { 00:18:43.660 "name": "BaseBdev4", 00:18:43.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.660 "is_configured": false, 00:18:43.660 "data_offset": 0, 00:18:43.660 "data_size": 0 00:18:43.660 } 00:18:43.660 ] 00:18:43.660 }' 00:18:43.660 01:01:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.660 01:01:17 -- common/autotest_common.sh@10 -- # set +x 00:18:44.229 01:01:18 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:44.229 [2024-11-18 01:01:18.580384] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:44.229 [2024-11-18 01:01:18.580439] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:44.229 01:01:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:44.488 [2024-11-18 01:01:18.824531] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:44.488 [2024-11-18 01:01:18.824619] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:44.488 [2024-11-18 01:01:18.824631] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:44.488 [2024-11-18 01:01:18.824658] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:44.488 [2024-11-18 01:01:18.824666] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:44.488 [2024-11-18 01:01:18.824684] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:44.488 [2024-11-18 01:01:18.824691] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:44.488 [2024-11-18 01:01:18.824718] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:44.488 01:01:18 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:44.747 [2024-11-18 01:01:19.132742] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.747 BaseBdev1 00:18:45.005 01:01:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:45.005 01:01:19 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:45.005 01:01:19 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:45.005 01:01:19 -- common/autotest_common.sh@899 -- # local i 00:18:45.005 01:01:19 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:45.006 01:01:19 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:45.006 01:01:19 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:45.264 01:01:19 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:45.264 [ 00:18:45.264 { 00:18:45.264 "name": "BaseBdev1", 00:18:45.264 "aliases": [ 00:18:45.264 "a24a1482-cbc1-4ed5-982d-4684bc24dc9b" 00:18:45.264 ], 00:18:45.264 "product_name": "Malloc disk", 00:18:45.264 "block_size": 512, 00:18:45.264 "num_blocks": 65536, 00:18:45.264 "uuid": "a24a1482-cbc1-4ed5-982d-4684bc24dc9b", 00:18:45.264 "assigned_rate_limits": { 00:18:45.264 "rw_ios_per_sec": 0, 00:18:45.264 "rw_mbytes_per_sec": 0, 00:18:45.264 "r_mbytes_per_sec": 0, 00:18:45.264 "w_mbytes_per_sec": 0 00:18:45.264 }, 00:18:45.264 "claimed": true, 00:18:45.264 "claim_type": "exclusive_write", 00:18:45.264 "zoned": false, 00:18:45.264 "supported_io_types": { 00:18:45.264 "read": true, 00:18:45.264 "write": true, 00:18:45.265 "unmap": true, 00:18:45.265 "write_zeroes": true, 00:18:45.265 "flush": true, 00:18:45.265 "reset": true, 00:18:45.265 "compare": false, 00:18:45.265 "compare_and_write": false, 00:18:45.265 "abort": true, 00:18:45.265 "nvme_admin": false, 00:18:45.265 "nvme_io": false 00:18:45.265 }, 00:18:45.265 "memory_domains": [ 00:18:45.265 { 00:18:45.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.265 "dma_device_type": 2 00:18:45.265 } 00:18:45.265 ], 00:18:45.265 "driver_specific": {} 00:18:45.265 } 00:18:45.265 ] 00:18:45.265 01:01:19 -- common/autotest_common.sh@905 -- # return 0 00:18:45.265 01:01:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:45.265 01:01:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:45.265 01:01:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:45.265 01:01:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:45.265 01:01:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:45.265 01:01:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:45.265 01:01:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:45.265 01:01:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:45.265 01:01:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:45.265 01:01:19 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:18:45.265 01:01:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.265 01:01:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.524 01:01:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:45.524 "name": "Existed_Raid", 00:18:45.524 "uuid": "28ae0f9f-7f92-4293-8b11-58e52a434828", 00:18:45.524 "strip_size_kb": 64, 00:18:45.524 "state": "configuring", 00:18:45.524 "raid_level": "concat", 00:18:45.524 "superblock": true, 00:18:45.524 "num_base_bdevs": 4, 00:18:45.524 "num_base_bdevs_discovered": 1, 00:18:45.524 "num_base_bdevs_operational": 4, 00:18:45.524 "base_bdevs_list": [ 00:18:45.524 { 00:18:45.524 "name": "BaseBdev1", 00:18:45.524 "uuid": "a24a1482-cbc1-4ed5-982d-4684bc24dc9b", 00:18:45.524 "is_configured": true, 00:18:45.524 "data_offset": 2048, 00:18:45.524 "data_size": 63488 00:18:45.524 }, 00:18:45.524 { 00:18:45.524 "name": "BaseBdev2", 00:18:45.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.524 "is_configured": false, 00:18:45.524 "data_offset": 0, 00:18:45.524 "data_size": 0 00:18:45.524 }, 00:18:45.524 { 00:18:45.524 "name": "BaseBdev3", 00:18:45.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.524 "is_configured": false, 00:18:45.524 "data_offset": 0, 00:18:45.524 "data_size": 0 00:18:45.524 }, 00:18:45.524 { 00:18:45.524 "name": "BaseBdev4", 00:18:45.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.524 "is_configured": false, 00:18:45.524 "data_offset": 0, 00:18:45.524 "data_size": 0 00:18:45.524 } 00:18:45.524 ] 00:18:45.524 }' 00:18:45.524 01:01:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:45.524 01:01:19 -- common/autotest_common.sh@10 -- # set +x 00:18:46.091 01:01:20 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:46.350 [2024-11-18 01:01:20.697077] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:46.350 [2024-11-18 01:01:20.697172] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:46.350 01:01:20 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:46.350 01:01:20 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:46.608 01:01:20 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:46.867 BaseBdev1 00:18:46.867 01:01:21 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:46.867 01:01:21 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:46.867 01:01:21 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:46.867 01:01:21 -- common/autotest_common.sh@899 -- # local i 00:18:46.867 01:01:21 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:46.867 01:01:21 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:46.867 01:01:21 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:47.125 01:01:21 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:47.384 [ 00:18:47.384 { 00:18:47.384 "name": "BaseBdev1", 00:18:47.384 "aliases": [ 00:18:47.384 "e650694c-30f5-4331-8fc2-957ed98436cb" 00:18:47.384 ], 
00:18:47.384 "product_name": "Malloc disk", 00:18:47.384 "block_size": 512, 00:18:47.384 "num_blocks": 65536, 00:18:47.384 "uuid": "e650694c-30f5-4331-8fc2-957ed98436cb", 00:18:47.384 "assigned_rate_limits": { 00:18:47.384 "rw_ios_per_sec": 0, 00:18:47.384 "rw_mbytes_per_sec": 0, 00:18:47.384 "r_mbytes_per_sec": 0, 00:18:47.384 "w_mbytes_per_sec": 0 00:18:47.384 }, 00:18:47.384 "claimed": false, 00:18:47.384 "zoned": false, 00:18:47.384 "supported_io_types": { 00:18:47.384 "read": true, 00:18:47.384 "write": true, 00:18:47.384 "unmap": true, 00:18:47.384 "write_zeroes": true, 00:18:47.384 "flush": true, 00:18:47.384 "reset": true, 00:18:47.384 "compare": false, 00:18:47.384 "compare_and_write": false, 00:18:47.384 "abort": true, 00:18:47.384 "nvme_admin": false, 00:18:47.384 "nvme_io": false 00:18:47.384 }, 00:18:47.384 "memory_domains": [ 00:18:47.384 { 00:18:47.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.384 "dma_device_type": 2 00:18:47.384 } 00:18:47.384 ], 00:18:47.384 "driver_specific": {} 00:18:47.384 } 00:18:47.384 ] 00:18:47.384 01:01:21 -- common/autotest_common.sh@905 -- # return 0 00:18:47.384 01:01:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:47.643 [2024-11-18 01:01:21.902825] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.643 [2024-11-18 01:01:21.905304] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:47.643 [2024-11-18 01:01:21.905396] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:47.643 [2024-11-18 01:01:21.905407] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:47.643 [2024-11-18 01:01:21.905433] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:47.643 [2024-11-18 01:01:21.905441] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:47.643 [2024-11-18 01:01:21.905458] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:47.643 01:01:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:47.643 01:01:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:47.643 01:01:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:47.643 01:01:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:47.643 01:01:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:47.643 01:01:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:47.643 01:01:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:47.643 01:01:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:47.643 01:01:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.643 01:01:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.643 01:01:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.643 01:01:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.643 01:01:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.643 01:01:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.902 01:01:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.902 "name": "Existed_Raid", 
00:18:47.902 "uuid": "0812942a-25aa-4c37-993d-966acd8ff69a", 00:18:47.902 "strip_size_kb": 64, 00:18:47.902 "state": "configuring", 00:18:47.902 "raid_level": "concat", 00:18:47.902 "superblock": true, 00:18:47.902 "num_base_bdevs": 4, 00:18:47.902 "num_base_bdevs_discovered": 1, 00:18:47.902 "num_base_bdevs_operational": 4, 00:18:47.902 "base_bdevs_list": [ 00:18:47.902 { 00:18:47.902 "name": "BaseBdev1", 00:18:47.902 "uuid": "e650694c-30f5-4331-8fc2-957ed98436cb", 00:18:47.902 "is_configured": true, 00:18:47.902 "data_offset": 2048, 00:18:47.902 "data_size": 63488 00:18:47.902 }, 00:18:47.902 { 00:18:47.902 "name": "BaseBdev2", 00:18:47.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.902 "is_configured": false, 00:18:47.902 "data_offset": 0, 00:18:47.902 "data_size": 0 00:18:47.902 }, 00:18:47.902 { 00:18:47.902 "name": "BaseBdev3", 00:18:47.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.902 "is_configured": false, 00:18:47.902 "data_offset": 0, 00:18:47.902 "data_size": 0 00:18:47.902 }, 00:18:47.902 { 00:18:47.902 "name": "BaseBdev4", 00:18:47.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.902 "is_configured": false, 00:18:47.902 "data_offset": 0, 00:18:47.902 "data_size": 0 00:18:47.902 } 00:18:47.902 ] 00:18:47.902 }' 00:18:47.902 01:01:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.902 01:01:22 -- common/autotest_common.sh@10 -- # set +x 00:18:48.470 01:01:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:48.728 [2024-11-18 01:01:23.016359] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:48.728 BaseBdev2 00:18:48.728 01:01:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:48.728 01:01:23 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:48.728 01:01:23 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:48.728 01:01:23 -- common/autotest_common.sh@899 -- # local i 00:18:48.728 01:01:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:48.728 01:01:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:48.728 01:01:23 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:48.987 01:01:23 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:49.246 [ 00:18:49.246 { 00:18:49.246 "name": "BaseBdev2", 00:18:49.246 "aliases": [ 00:18:49.246 "49393723-bdbd-487f-9cf4-68477628b8b2" 00:18:49.246 ], 00:18:49.246 "product_name": "Malloc disk", 00:18:49.246 "block_size": 512, 00:18:49.246 "num_blocks": 65536, 00:18:49.246 "uuid": "49393723-bdbd-487f-9cf4-68477628b8b2", 00:18:49.246 "assigned_rate_limits": { 00:18:49.246 "rw_ios_per_sec": 0, 00:18:49.246 "rw_mbytes_per_sec": 0, 00:18:49.246 "r_mbytes_per_sec": 0, 00:18:49.246 "w_mbytes_per_sec": 0 00:18:49.246 }, 00:18:49.246 "claimed": true, 00:18:49.246 "claim_type": "exclusive_write", 00:18:49.246 "zoned": false, 00:18:49.246 "supported_io_types": { 00:18:49.246 "read": true, 00:18:49.246 "write": true, 00:18:49.246 "unmap": true, 00:18:49.246 "write_zeroes": true, 00:18:49.246 "flush": true, 00:18:49.246 "reset": true, 00:18:49.246 "compare": false, 00:18:49.246 "compare_and_write": false, 00:18:49.246 "abort": true, 00:18:49.246 "nvme_admin": false, 00:18:49.246 "nvme_io": false 00:18:49.246 }, 00:18:49.246 
"memory_domains": [ 00:18:49.246 { 00:18:49.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.246 "dma_device_type": 2 00:18:49.246 } 00:18:49.246 ], 00:18:49.246 "driver_specific": {} 00:18:49.246 } 00:18:49.246 ] 00:18:49.246 01:01:23 -- common/autotest_common.sh@905 -- # return 0 00:18:49.246 01:01:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:49.246 01:01:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:49.246 01:01:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:49.246 01:01:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:49.246 01:01:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:49.246 01:01:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:49.246 01:01:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:49.246 01:01:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:49.246 01:01:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:49.246 01:01:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:49.246 01:01:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:49.246 01:01:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:49.246 01:01:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.246 01:01:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.505 01:01:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:49.505 "name": "Existed_Raid", 00:18:49.505 "uuid": "0812942a-25aa-4c37-993d-966acd8ff69a", 00:18:49.505 "strip_size_kb": 64, 00:18:49.505 "state": "configuring", 00:18:49.505 "raid_level": "concat", 00:18:49.505 "superblock": true, 00:18:49.505 "num_base_bdevs": 4, 00:18:49.505 "num_base_bdevs_discovered": 2, 00:18:49.505 "num_base_bdevs_operational": 4, 00:18:49.505 "base_bdevs_list": [ 00:18:49.505 { 00:18:49.505 "name": "BaseBdev1", 00:18:49.505 "uuid": "e650694c-30f5-4331-8fc2-957ed98436cb", 00:18:49.505 "is_configured": true, 00:18:49.505 "data_offset": 2048, 00:18:49.505 "data_size": 63488 00:18:49.505 }, 00:18:49.505 { 00:18:49.505 "name": "BaseBdev2", 00:18:49.505 "uuid": "49393723-bdbd-487f-9cf4-68477628b8b2", 00:18:49.505 "is_configured": true, 00:18:49.505 "data_offset": 2048, 00:18:49.505 "data_size": 63488 00:18:49.505 }, 00:18:49.505 { 00:18:49.505 "name": "BaseBdev3", 00:18:49.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.505 "is_configured": false, 00:18:49.505 "data_offset": 0, 00:18:49.505 "data_size": 0 00:18:49.505 }, 00:18:49.505 { 00:18:49.505 "name": "BaseBdev4", 00:18:49.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.505 "is_configured": false, 00:18:49.505 "data_offset": 0, 00:18:49.505 "data_size": 0 00:18:49.505 } 00:18:49.505 ] 00:18:49.505 }' 00:18:49.505 01:01:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:49.505 01:01:23 -- common/autotest_common.sh@10 -- # set +x 00:18:50.071 01:01:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:50.329 [2024-11-18 01:01:24.594342] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:50.329 BaseBdev3 00:18:50.329 01:01:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:50.329 01:01:24 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:50.329 01:01:24 -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:18:50.329 01:01:24 -- common/autotest_common.sh@899 -- # local i 00:18:50.329 01:01:24 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:50.329 01:01:24 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:50.329 01:01:24 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:50.587 01:01:24 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:50.846 [ 00:18:50.846 { 00:18:50.846 "name": "BaseBdev3", 00:18:50.846 "aliases": [ 00:18:50.846 "92756263-b851-4be2-99e9-17804fe199d1" 00:18:50.846 ], 00:18:50.846 "product_name": "Malloc disk", 00:18:50.846 "block_size": 512, 00:18:50.846 "num_blocks": 65536, 00:18:50.846 "uuid": "92756263-b851-4be2-99e9-17804fe199d1", 00:18:50.846 "assigned_rate_limits": { 00:18:50.846 "rw_ios_per_sec": 0, 00:18:50.846 "rw_mbytes_per_sec": 0, 00:18:50.846 "r_mbytes_per_sec": 0, 00:18:50.846 "w_mbytes_per_sec": 0 00:18:50.846 }, 00:18:50.846 "claimed": true, 00:18:50.846 "claim_type": "exclusive_write", 00:18:50.846 "zoned": false, 00:18:50.846 "supported_io_types": { 00:18:50.846 "read": true, 00:18:50.846 "write": true, 00:18:50.846 "unmap": true, 00:18:50.846 "write_zeroes": true, 00:18:50.846 "flush": true, 00:18:50.846 "reset": true, 00:18:50.846 "compare": false, 00:18:50.846 "compare_and_write": false, 00:18:50.846 "abort": true, 00:18:50.846 "nvme_admin": false, 00:18:50.846 "nvme_io": false 00:18:50.846 }, 00:18:50.846 "memory_domains": [ 00:18:50.846 { 00:18:50.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.846 "dma_device_type": 2 00:18:50.846 } 00:18:50.846 ], 00:18:50.846 "driver_specific": {} 00:18:50.846 } 00:18:50.846 ] 00:18:50.847 01:01:25 -- common/autotest_common.sh@905 -- # return 0 00:18:50.847 01:01:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:50.847 01:01:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:50.847 01:01:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:50.847 01:01:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:50.847 01:01:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:50.847 01:01:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:50.847 01:01:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:50.847 01:01:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:50.847 01:01:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:50.847 01:01:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.847 01:01:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.847 01:01:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.847 01:01:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.847 01:01:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.105 01:01:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:51.105 "name": "Existed_Raid", 00:18:51.105 "uuid": "0812942a-25aa-4c37-993d-966acd8ff69a", 00:18:51.105 "strip_size_kb": 64, 00:18:51.105 "state": "configuring", 00:18:51.105 "raid_level": "concat", 00:18:51.105 "superblock": true, 00:18:51.105 "num_base_bdevs": 4, 00:18:51.105 "num_base_bdevs_discovered": 3, 00:18:51.105 "num_base_bdevs_operational": 4, 00:18:51.105 "base_bdevs_list": [ 00:18:51.105 { 
00:18:51.105 "name": "BaseBdev1", 00:18:51.105 "uuid": "e650694c-30f5-4331-8fc2-957ed98436cb", 00:18:51.105 "is_configured": true, 00:18:51.105 "data_offset": 2048, 00:18:51.105 "data_size": 63488 00:18:51.105 }, 00:18:51.105 { 00:18:51.105 "name": "BaseBdev2", 00:18:51.105 "uuid": "49393723-bdbd-487f-9cf4-68477628b8b2", 00:18:51.105 "is_configured": true, 00:18:51.105 "data_offset": 2048, 00:18:51.105 "data_size": 63488 00:18:51.105 }, 00:18:51.105 { 00:18:51.105 "name": "BaseBdev3", 00:18:51.105 "uuid": "92756263-b851-4be2-99e9-17804fe199d1", 00:18:51.105 "is_configured": true, 00:18:51.105 "data_offset": 2048, 00:18:51.105 "data_size": 63488 00:18:51.105 }, 00:18:51.105 { 00:18:51.105 "name": "BaseBdev4", 00:18:51.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.105 "is_configured": false, 00:18:51.105 "data_offset": 0, 00:18:51.105 "data_size": 0 00:18:51.105 } 00:18:51.105 ] 00:18:51.105 }' 00:18:51.105 01:01:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:51.105 01:01:25 -- common/autotest_common.sh@10 -- # set +x 00:18:51.672 01:01:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:51.930 [2024-11-18 01:01:26.090146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:51.930 [2024-11-18 01:01:26.090406] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:18:51.930 [2024-11-18 01:01:26.090420] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:51.930 [2024-11-18 01:01:26.090575] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:18:51.930 [2024-11-18 01:01:26.090965] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:18:51.930 [2024-11-18 01:01:26.090975] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:18:51.930 [2024-11-18 01:01:26.091125] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.930 BaseBdev4 00:18:51.930 01:01:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:51.930 01:01:26 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:51.930 01:01:26 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:51.930 01:01:26 -- common/autotest_common.sh@899 -- # local i 00:18:51.930 01:01:26 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:51.930 01:01:26 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:51.930 01:01:26 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:52.213 01:01:26 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:52.517 [ 00:18:52.517 { 00:18:52.517 "name": "BaseBdev4", 00:18:52.517 "aliases": [ 00:18:52.517 "63eaf60b-bae7-46c7-aad9-598b2aadb32c" 00:18:52.517 ], 00:18:52.517 "product_name": "Malloc disk", 00:18:52.517 "block_size": 512, 00:18:52.517 "num_blocks": 65536, 00:18:52.517 "uuid": "63eaf60b-bae7-46c7-aad9-598b2aadb32c", 00:18:52.517 "assigned_rate_limits": { 00:18:52.517 "rw_ios_per_sec": 0, 00:18:52.517 "rw_mbytes_per_sec": 0, 00:18:52.517 "r_mbytes_per_sec": 0, 00:18:52.517 "w_mbytes_per_sec": 0 00:18:52.517 }, 00:18:52.517 "claimed": true, 00:18:52.517 "claim_type": "exclusive_write", 00:18:52.517 "zoned": false, 
00:18:52.517 "supported_io_types": { 00:18:52.517 "read": true, 00:18:52.517 "write": true, 00:18:52.517 "unmap": true, 00:18:52.517 "write_zeroes": true, 00:18:52.517 "flush": true, 00:18:52.517 "reset": true, 00:18:52.517 "compare": false, 00:18:52.517 "compare_and_write": false, 00:18:52.517 "abort": true, 00:18:52.517 "nvme_admin": false, 00:18:52.517 "nvme_io": false 00:18:52.517 }, 00:18:52.517 "memory_domains": [ 00:18:52.517 { 00:18:52.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.517 "dma_device_type": 2 00:18:52.517 } 00:18:52.517 ], 00:18:52.517 "driver_specific": {} 00:18:52.517 } 00:18:52.517 ] 00:18:52.517 01:01:26 -- common/autotest_common.sh@905 -- # return 0 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.517 01:01:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:52.517 "name": "Existed_Raid", 00:18:52.517 "uuid": "0812942a-25aa-4c37-993d-966acd8ff69a", 00:18:52.517 "strip_size_kb": 64, 00:18:52.517 "state": "online", 00:18:52.517 "raid_level": "concat", 00:18:52.517 "superblock": true, 00:18:52.517 "num_base_bdevs": 4, 00:18:52.517 "num_base_bdevs_discovered": 4, 00:18:52.517 "num_base_bdevs_operational": 4, 00:18:52.517 "base_bdevs_list": [ 00:18:52.517 { 00:18:52.517 "name": "BaseBdev1", 00:18:52.517 "uuid": "e650694c-30f5-4331-8fc2-957ed98436cb", 00:18:52.517 "is_configured": true, 00:18:52.517 "data_offset": 2048, 00:18:52.517 "data_size": 63488 00:18:52.517 }, 00:18:52.517 { 00:18:52.518 "name": "BaseBdev2", 00:18:52.518 "uuid": "49393723-bdbd-487f-9cf4-68477628b8b2", 00:18:52.518 "is_configured": true, 00:18:52.518 "data_offset": 2048, 00:18:52.518 "data_size": 63488 00:18:52.518 }, 00:18:52.518 { 00:18:52.518 "name": "BaseBdev3", 00:18:52.518 "uuid": "92756263-b851-4be2-99e9-17804fe199d1", 00:18:52.518 "is_configured": true, 00:18:52.518 "data_offset": 2048, 00:18:52.518 "data_size": 63488 00:18:52.518 }, 00:18:52.518 { 00:18:52.518 "name": "BaseBdev4", 00:18:52.518 "uuid": "63eaf60b-bae7-46c7-aad9-598b2aadb32c", 00:18:52.518 "is_configured": true, 00:18:52.518 "data_offset": 2048, 00:18:52.518 "data_size": 63488 00:18:52.518 } 00:18:52.518 ] 00:18:52.518 }' 00:18:52.518 01:01:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:52.518 01:01:26 -- common/autotest_common.sh@10 -- # set +x 00:18:53.104 01:01:27 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:18:53.363 [2024-11-18 01:01:27.630537] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:53.363 [2024-11-18 01:01:27.630589] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:53.363 [2024-11-18 01:01:27.630663] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.363 01:01:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.622 01:01:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:53.622 "name": "Existed_Raid", 00:18:53.622 "uuid": "0812942a-25aa-4c37-993d-966acd8ff69a", 00:18:53.622 "strip_size_kb": 64, 00:18:53.622 "state": "offline", 00:18:53.622 "raid_level": "concat", 00:18:53.622 "superblock": true, 00:18:53.622 "num_base_bdevs": 4, 00:18:53.622 "num_base_bdevs_discovered": 3, 00:18:53.622 "num_base_bdevs_operational": 3, 00:18:53.622 "base_bdevs_list": [ 00:18:53.622 { 00:18:53.622 "name": null, 00:18:53.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.622 "is_configured": false, 00:18:53.622 "data_offset": 2048, 00:18:53.622 "data_size": 63488 00:18:53.622 }, 00:18:53.622 { 00:18:53.622 "name": "BaseBdev2", 00:18:53.622 "uuid": "49393723-bdbd-487f-9cf4-68477628b8b2", 00:18:53.622 "is_configured": true, 00:18:53.622 "data_offset": 2048, 00:18:53.622 "data_size": 63488 00:18:53.622 }, 00:18:53.622 { 00:18:53.622 "name": "BaseBdev3", 00:18:53.622 "uuid": "92756263-b851-4be2-99e9-17804fe199d1", 00:18:53.622 "is_configured": true, 00:18:53.622 "data_offset": 2048, 00:18:53.622 "data_size": 63488 00:18:53.622 }, 00:18:53.622 { 00:18:53.622 "name": "BaseBdev4", 00:18:53.622 "uuid": "63eaf60b-bae7-46c7-aad9-598b2aadb32c", 00:18:53.622 "is_configured": true, 00:18:53.622 "data_offset": 2048, 00:18:53.622 "data_size": 63488 00:18:53.622 } 00:18:53.622 ] 00:18:53.622 }' 00:18:53.622 01:01:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:53.622 01:01:27 -- common/autotest_common.sh@10 -- # set +x 00:18:54.191 01:01:28 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:54.191 01:01:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:54.191 01:01:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.191 01:01:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:54.450 01:01:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:54.450 01:01:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:54.450 01:01:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:54.708 [2024-11-18 01:01:28.918540] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:54.708 01:01:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:54.708 01:01:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:54.708 01:01:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.708 01:01:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:54.968 01:01:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:54.968 01:01:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:54.968 01:01:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:55.228 [2024-11-18 01:01:29.391793] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:55.228 01:01:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:55.228 01:01:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:55.228 01:01:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.228 01:01:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:55.487 01:01:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:55.487 01:01:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:55.487 01:01:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:55.744 [2024-11-18 01:01:29.906703] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:55.744 [2024-11-18 01:01:29.906796] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:18:55.744 01:01:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:55.744 01:01:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:55.744 01:01:29 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.744 01:01:29 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:56.002 01:01:30 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:56.002 01:01:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:56.002 01:01:30 -- bdev/bdev_raid.sh@287 -- # killprocess 130780 00:18:56.002 01:01:30 -- common/autotest_common.sh@936 -- # '[' -z 130780 ']' 00:18:56.002 01:01:30 -- common/autotest_common.sh@940 -- # kill -0 130780 00:18:56.002 01:01:30 -- common/autotest_common.sh@941 -- # uname 00:18:56.002 01:01:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:56.002 01:01:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130780 00:18:56.002 killing process with pid 130780 00:18:56.002 01:01:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:56.002 01:01:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:56.002 01:01:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130780' 
00:18:56.002 01:01:30 -- common/autotest_common.sh@955 -- # kill 130780 00:18:56.002 01:01:30 -- common/autotest_common.sh@960 -- # wait 130780 00:18:56.002 [2024-11-18 01:01:30.228264] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.002 [2024-11-18 01:01:30.228365] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:56.259 01:01:30 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:56.259 00:18:56.259 real 0m14.176s 00:18:56.259 user 0m25.244s 00:18:56.259 sys 0m2.395s 00:18:56.259 01:01:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:56.259 01:01:30 -- common/autotest_common.sh@10 -- # set +x 00:18:56.259 ************************************ 00:18:56.259 END TEST raid_state_function_test_sb 00:18:56.259 ************************************ 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:56.518 01:01:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:56.518 01:01:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:56.518 01:01:30 -- common/autotest_common.sh@10 -- # set +x 00:18:56.518 ************************************ 00:18:56.518 START TEST raid_superblock_test 00:18:56.518 ************************************ 00:18:56.518 01:01:30 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 4 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@357 -- # raid_pid=131221 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@358 -- # waitforlisten 131221 /var/tmp/spdk-raid.sock 00:18:56.518 01:01:30 -- common/autotest_common.sh@829 -- # '[' -z 131221 ']' 00:18:56.518 01:01:30 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:56.518 01:01:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:56.518 01:01:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:56.518 01:01:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:56.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
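The raid_superblock_test run that starts here builds the array out of passthru bdevs layered on malloc bdevs and passes -s to bdev_raid_create, so each base bdev reserves room for on-disk RAID metadata (the superblock runs above report data_offset 2048 and data_size 63488 instead of 0 and 65536). A comparable hand-driven sketch, again assuming a bdev_svc app on /var/tmp/spdk-raid.sock; the rpc() wrapper is illustrative shorthand, and the UUIDs simply mirror the ones the test passes:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  for i in 1 2 3 4; do
    rpc bdev_malloc_create 32 512 -b "malloc$i"
    rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
  done

  # -s writes a superblock to each base bdev, which is what shifts data_offset to 2048 blocks
  rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # "online"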
00:18:56.518 01:01:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.518 01:01:30 -- common/autotest_common.sh@10 -- # set +x 00:18:56.518 [2024-11-18 01:01:30.775084] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:56.518 [2024-11-18 01:01:30.775473] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131221 ] 00:18:56.777 [2024-11-18 01:01:30.932084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.777 [2024-11-18 01:01:31.015283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.777 [2024-11-18 01:01:31.096132] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.344 01:01:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.344 01:01:31 -- common/autotest_common.sh@862 -- # return 0 00:18:57.344 01:01:31 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:57.344 01:01:31 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:57.344 01:01:31 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:57.344 01:01:31 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:57.344 01:01:31 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:57.344 01:01:31 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:57.344 01:01:31 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:57.344 01:01:31 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:57.344 01:01:31 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:57.603 malloc1 00:18:57.603 01:01:31 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:57.861 [2024-11-18 01:01:32.110352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:57.861 [2024-11-18 01:01:32.110491] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.861 [2024-11-18 01:01:32.110550] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:18:57.861 [2024-11-18 01:01:32.110625] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.861 [2024-11-18 01:01:32.113607] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.861 [2024-11-18 01:01:32.113674] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:57.861 pt1 00:18:57.861 01:01:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:57.861 01:01:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:57.861 01:01:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:57.861 01:01:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:57.861 01:01:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:57.861 01:01:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:57.861 01:01:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:57.861 01:01:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:57.861 01:01:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:58.120 malloc2 00:18:58.120 01:01:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:58.379 [2024-11-18 01:01:32.582318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:58.379 [2024-11-18 01:01:32.582437] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.379 [2024-11-18 01:01:32.582481] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:18:58.379 [2024-11-18 01:01:32.582531] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.379 [2024-11-18 01:01:32.585396] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.379 [2024-11-18 01:01:32.585459] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:58.379 pt2 00:18:58.379 01:01:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:58.379 01:01:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:58.379 01:01:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:58.379 01:01:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:58.379 01:01:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:58.379 01:01:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:58.379 01:01:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:58.379 01:01:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:58.379 01:01:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:58.637 malloc3 00:18:58.637 01:01:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:58.637 [2024-11-18 01:01:33.010431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:58.637 [2024-11-18 01:01:33.010558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.637 [2024-11-18 01:01:33.010604] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:58.637 [2024-11-18 01:01:33.010651] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.637 [2024-11-18 01:01:33.013573] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.637 [2024-11-18 01:01:33.013640] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:58.637 pt3 00:18:58.637 01:01:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:58.637 01:01:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:58.637 01:01:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:58.637 01:01:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:58.637 01:01:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:58.637 01:01:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:58.637 01:01:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:58.637 01:01:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:58.637 01:01:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:58.896 malloc4 00:18:58.896 01:01:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:59.154 [2024-11-18 01:01:33.422202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:59.154 [2024-11-18 01:01:33.422342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.154 [2024-11-18 01:01:33.422381] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:59.154 [2024-11-18 01:01:33.422439] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.154 [2024-11-18 01:01:33.425297] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.154 [2024-11-18 01:01:33.425374] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:59.154 pt4 00:18:59.154 01:01:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:59.154 01:01:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:59.154 01:01:33 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:59.414 [2024-11-18 01:01:33.622366] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:59.414 [2024-11-18 01:01:33.624862] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:59.414 [2024-11-18 01:01:33.624933] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:59.414 [2024-11-18 01:01:33.624973] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:59.414 [2024-11-18 01:01:33.625198] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:18:59.414 [2024-11-18 01:01:33.625209] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:59.414 [2024-11-18 01:01:33.625395] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:18:59.414 [2024-11-18 01:01:33.625855] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:18:59.414 [2024-11-18 01:01:33.625874] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:18:59.414 [2024-11-18 01:01:33.626027] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.414 01:01:33 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:59.414 01:01:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:59.414 01:01:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:59.414 01:01:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:59.414 01:01:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:59.414 01:01:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:59.414 01:01:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:59.414 01:01:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:59.414 01:01:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:59.414 01:01:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:59.414 01:01:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:59.414 01:01:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.673 01:01:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:59.673 "name": "raid_bdev1", 00:18:59.673 "uuid": "d88ca85d-523f-4a63-9e1a-0a2b4e3b0c00", 00:18:59.673 "strip_size_kb": 64, 00:18:59.673 "state": "online", 00:18:59.673 "raid_level": "concat", 00:18:59.673 "superblock": true, 00:18:59.673 "num_base_bdevs": 4, 00:18:59.673 "num_base_bdevs_discovered": 4, 00:18:59.673 "num_base_bdevs_operational": 4, 00:18:59.673 "base_bdevs_list": [ 00:18:59.673 { 00:18:59.673 "name": "pt1", 00:18:59.673 "uuid": "c061271b-d211-518d-a003-a779683d2b7c", 00:18:59.673 "is_configured": true, 00:18:59.673 "data_offset": 2048, 00:18:59.673 "data_size": 63488 00:18:59.673 }, 00:18:59.673 { 00:18:59.673 "name": "pt2", 00:18:59.673 "uuid": "50bf7005-9b31-58ea-9978-a6f4f9125c8b", 00:18:59.673 "is_configured": true, 00:18:59.673 "data_offset": 2048, 00:18:59.673 "data_size": 63488 00:18:59.673 }, 00:18:59.673 { 00:18:59.673 "name": "pt3", 00:18:59.673 "uuid": "8b39ac6c-a102-54aa-a104-9b585ba38bca", 00:18:59.673 "is_configured": true, 00:18:59.673 "data_offset": 2048, 00:18:59.673 "data_size": 63488 00:18:59.673 }, 00:18:59.673 { 00:18:59.673 "name": "pt4", 00:18:59.673 "uuid": "82f0e4c5-f2ab-5118-84e3-2ec60b064f72", 00:18:59.673 "is_configured": true, 00:18:59.673 "data_offset": 2048, 00:18:59.673 "data_size": 63488 00:18:59.673 } 00:18:59.673 ] 00:18:59.673 }' 00:18:59.673 01:01:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:59.673 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:19:00.239 01:01:34 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:00.239 01:01:34 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:00.498 [2024-11-18 01:01:34.662729] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.498 01:01:34 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d88ca85d-523f-4a63-9e1a-0a2b4e3b0c00 00:19:00.498 01:01:34 -- bdev/bdev_raid.sh@380 -- # '[' -z d88ca85d-523f-4a63-9e1a-0a2b4e3b0c00 ']' 00:19:00.498 01:01:34 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:00.757 [2024-11-18 01:01:34.926952] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:00.757 [2024-11-18 01:01:34.927002] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.757 [2024-11-18 01:01:34.927131] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.757 [2024-11-18 01:01:34.927228] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.757 [2024-11-18 01:01:34.927239] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:19:00.757 01:01:34 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.757 01:01:34 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:01.015 01:01:35 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:01.015 01:01:35 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:01.015 01:01:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:01.015 01:01:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:19:01.274 01:01:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:01.274 01:01:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:01.533 01:01:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:01.533 01:01:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:01.533 01:01:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:01.533 01:01:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:01.792 01:01:36 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:01.792 01:01:36 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:02.050 01:01:36 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:02.050 01:01:36 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:02.050 01:01:36 -- common/autotest_common.sh@650 -- # local es=0 00:19:02.050 01:01:36 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:02.050 01:01:36 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.050 01:01:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:02.050 01:01:36 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.050 01:01:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:02.050 01:01:36 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.050 01:01:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:02.050 01:01:36 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.050 01:01:36 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:02.051 01:01:36 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:02.309 [2024-11-18 01:01:36.566891] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:02.309 [2024-11-18 01:01:36.569619] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:02.309 [2024-11-18 01:01:36.569684] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:02.309 [2024-11-18 01:01:36.569715] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:02.309 [2024-11-18 01:01:36.569767] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:02.309 [2024-11-18 01:01:36.570803] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:02.309 [2024-11-18 01:01:36.570955] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:02.309 
[2024-11-18 01:01:36.571358] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:02.309 [2024-11-18 01:01:36.571623] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.309 [2024-11-18 01:01:36.571647] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:19:02.309 request: 00:19:02.309 { 00:19:02.309 "name": "raid_bdev1", 00:19:02.309 "raid_level": "concat", 00:19:02.309 "base_bdevs": [ 00:19:02.309 "malloc1", 00:19:02.309 "malloc2", 00:19:02.309 "malloc3", 00:19:02.309 "malloc4" 00:19:02.309 ], 00:19:02.309 "superblock": false, 00:19:02.309 "strip_size_kb": 64, 00:19:02.309 "method": "bdev_raid_create", 00:19:02.309 "req_id": 1 00:19:02.309 } 00:19:02.309 Got JSON-RPC error response 00:19:02.309 response: 00:19:02.309 { 00:19:02.309 "code": -17, 00:19:02.309 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:02.309 } 00:19:02.309 01:01:36 -- common/autotest_common.sh@653 -- # es=1 00:19:02.309 01:01:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:02.309 01:01:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:02.309 01:01:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:02.309 01:01:36 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.309 01:01:36 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:02.568 01:01:36 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:02.568 01:01:36 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:02.568 01:01:36 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:02.568 [2024-11-18 01:01:36.956172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:02.568 [2024-11-18 01:01:36.956567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.568 [2024-11-18 01:01:36.957097] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:02.568 [2024-11-18 01:01:36.957224] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.568 [2024-11-18 01:01:36.960351] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.568 [2024-11-18 01:01:36.960537] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:02.568 [2024-11-18 01:01:36.960948] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:02.568 [2024-11-18 01:01:36.961030] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:02.568 pt1 00:19:02.827 01:01:36 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:02.827 01:01:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:02.827 01:01:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:02.827 01:01:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:02.827 01:01:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:02.827 01:01:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:02.827 01:01:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.827 01:01:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.827 01:01:36 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:19:02.827 01:01:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.827 01:01:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.827 01:01:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.085 01:01:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:03.085 "name": "raid_bdev1", 00:19:03.085 "uuid": "d88ca85d-523f-4a63-9e1a-0a2b4e3b0c00", 00:19:03.085 "strip_size_kb": 64, 00:19:03.085 "state": "configuring", 00:19:03.086 "raid_level": "concat", 00:19:03.086 "superblock": true, 00:19:03.086 "num_base_bdevs": 4, 00:19:03.086 "num_base_bdevs_discovered": 1, 00:19:03.086 "num_base_bdevs_operational": 4, 00:19:03.086 "base_bdevs_list": [ 00:19:03.086 { 00:19:03.086 "name": "pt1", 00:19:03.086 "uuid": "c061271b-d211-518d-a003-a779683d2b7c", 00:19:03.086 "is_configured": true, 00:19:03.086 "data_offset": 2048, 00:19:03.086 "data_size": 63488 00:19:03.086 }, 00:19:03.086 { 00:19:03.086 "name": null, 00:19:03.086 "uuid": "50bf7005-9b31-58ea-9978-a6f4f9125c8b", 00:19:03.086 "is_configured": false, 00:19:03.086 "data_offset": 2048, 00:19:03.086 "data_size": 63488 00:19:03.086 }, 00:19:03.086 { 00:19:03.086 "name": null, 00:19:03.086 "uuid": "8b39ac6c-a102-54aa-a104-9b585ba38bca", 00:19:03.086 "is_configured": false, 00:19:03.086 "data_offset": 2048, 00:19:03.086 "data_size": 63488 00:19:03.086 }, 00:19:03.086 { 00:19:03.086 "name": null, 00:19:03.086 "uuid": "82f0e4c5-f2ab-5118-84e3-2ec60b064f72", 00:19:03.086 "is_configured": false, 00:19:03.086 "data_offset": 2048, 00:19:03.086 "data_size": 63488 00:19:03.086 } 00:19:03.086 ] 00:19:03.086 }' 00:19:03.086 01:01:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:03.086 01:01:37 -- common/autotest_common.sh@10 -- # set +x 00:19:03.653 01:01:37 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:03.653 01:01:37 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:03.653 [2024-11-18 01:01:37.954754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:03.653 [2024-11-18 01:01:37.954890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.653 [2024-11-18 01:01:37.954949] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:03.653 [2024-11-18 01:01:37.954973] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.653 [2024-11-18 01:01:37.955475] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.653 [2024-11-18 01:01:37.955520] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:03.653 [2024-11-18 01:01:37.955623] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:03.653 [2024-11-18 01:01:37.955648] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:03.653 pt2 00:19:03.653 01:01:37 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:03.912 [2024-11-18 01:01:38.150754] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:03.912 01:01:38 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:03.912 01:01:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:19:03.912 01:01:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:03.912 01:01:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:03.912 01:01:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:03.912 01:01:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:03.912 01:01:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:03.912 01:01:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:03.912 01:01:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:03.912 01:01:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:03.912 01:01:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.912 01:01:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.171 01:01:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:04.171 "name": "raid_bdev1", 00:19:04.171 "uuid": "d88ca85d-523f-4a63-9e1a-0a2b4e3b0c00", 00:19:04.171 "strip_size_kb": 64, 00:19:04.171 "state": "configuring", 00:19:04.171 "raid_level": "concat", 00:19:04.171 "superblock": true, 00:19:04.171 "num_base_bdevs": 4, 00:19:04.171 "num_base_bdevs_discovered": 1, 00:19:04.171 "num_base_bdevs_operational": 4, 00:19:04.171 "base_bdevs_list": [ 00:19:04.171 { 00:19:04.171 "name": "pt1", 00:19:04.171 "uuid": "c061271b-d211-518d-a003-a779683d2b7c", 00:19:04.171 "is_configured": true, 00:19:04.171 "data_offset": 2048, 00:19:04.171 "data_size": 63488 00:19:04.171 }, 00:19:04.171 { 00:19:04.171 "name": null, 00:19:04.171 "uuid": "50bf7005-9b31-58ea-9978-a6f4f9125c8b", 00:19:04.171 "is_configured": false, 00:19:04.171 "data_offset": 2048, 00:19:04.171 "data_size": 63488 00:19:04.171 }, 00:19:04.171 { 00:19:04.171 "name": null, 00:19:04.171 "uuid": "8b39ac6c-a102-54aa-a104-9b585ba38bca", 00:19:04.171 "is_configured": false, 00:19:04.171 "data_offset": 2048, 00:19:04.171 "data_size": 63488 00:19:04.171 }, 00:19:04.171 { 00:19:04.171 "name": null, 00:19:04.171 "uuid": "82f0e4c5-f2ab-5118-84e3-2ec60b064f72", 00:19:04.171 "is_configured": false, 00:19:04.171 "data_offset": 2048, 00:19:04.171 "data_size": 63488 00:19:04.171 } 00:19:04.171 ] 00:19:04.171 }' 00:19:04.171 01:01:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:04.171 01:01:38 -- common/autotest_common.sh@10 -- # set +x 00:19:04.739 01:01:38 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:04.739 01:01:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:04.739 01:01:38 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:04.996 [2024-11-18 01:01:39.218891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:04.996 [2024-11-18 01:01:39.219012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.996 [2024-11-18 01:01:39.219057] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:04.996 [2024-11-18 01:01:39.219083] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.996 [2024-11-18 01:01:39.219580] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.996 [2024-11-18 01:01:39.219640] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:04.996 [2024-11-18 01:01:39.219732] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:19:04.996 [2024-11-18 01:01:39.219754] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:04.996 pt2 00:19:04.996 01:01:39 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:04.996 01:01:39 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:04.996 01:01:39 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:05.254 [2024-11-18 01:01:39.491002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:05.254 [2024-11-18 01:01:39.491135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.254 [2024-11-18 01:01:39.491174] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:05.254 [2024-11-18 01:01:39.491204] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.254 [2024-11-18 01:01:39.491679] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.254 [2024-11-18 01:01:39.491737] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:05.254 [2024-11-18 01:01:39.491849] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:05.254 [2024-11-18 01:01:39.491874] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:05.254 pt3 00:19:05.254 01:01:39 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:05.254 01:01:39 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:05.254 01:01:39 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:05.513 [2024-11-18 01:01:39.755025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:05.513 [2024-11-18 01:01:39.755141] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.513 [2024-11-18 01:01:39.755182] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:05.513 [2024-11-18 01:01:39.755211] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.513 [2024-11-18 01:01:39.755672] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.513 [2024-11-18 01:01:39.755720] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:05.513 [2024-11-18 01:01:39.755840] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:05.513 [2024-11-18 01:01:39.755865] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:05.513 [2024-11-18 01:01:39.756002] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:19:05.513 [2024-11-18 01:01:39.756011] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:05.513 [2024-11-18 01:01:39.756094] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:19:05.513 [2024-11-18 01:01:39.756443] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:19:05.513 [2024-11-18 01:01:39.756464] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:19:05.513 [2024-11-18 01:01:39.756565] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:19:05.513 pt4 00:19:05.513 01:01:39 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:05.513 01:01:39 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:05.513 01:01:39 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:05.513 01:01:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:05.513 01:01:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:05.513 01:01:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:05.513 01:01:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:05.513 01:01:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:05.513 01:01:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.513 01:01:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.513 01:01:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:05.513 01:01:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.513 01:01:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.513 01:01:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.773 01:01:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:05.773 "name": "raid_bdev1", 00:19:05.773 "uuid": "d88ca85d-523f-4a63-9e1a-0a2b4e3b0c00", 00:19:05.773 "strip_size_kb": 64, 00:19:05.773 "state": "online", 00:19:05.773 "raid_level": "concat", 00:19:05.773 "superblock": true, 00:19:05.773 "num_base_bdevs": 4, 00:19:05.773 "num_base_bdevs_discovered": 4, 00:19:05.773 "num_base_bdevs_operational": 4, 00:19:05.773 "base_bdevs_list": [ 00:19:05.773 { 00:19:05.773 "name": "pt1", 00:19:05.773 "uuid": "c061271b-d211-518d-a003-a779683d2b7c", 00:19:05.773 "is_configured": true, 00:19:05.773 "data_offset": 2048, 00:19:05.773 "data_size": 63488 00:19:05.773 }, 00:19:05.773 { 00:19:05.773 "name": "pt2", 00:19:05.773 "uuid": "50bf7005-9b31-58ea-9978-a6f4f9125c8b", 00:19:05.773 "is_configured": true, 00:19:05.773 "data_offset": 2048, 00:19:05.773 "data_size": 63488 00:19:05.773 }, 00:19:05.773 { 00:19:05.773 "name": "pt3", 00:19:05.773 "uuid": "8b39ac6c-a102-54aa-a104-9b585ba38bca", 00:19:05.773 "is_configured": true, 00:19:05.773 "data_offset": 2048, 00:19:05.773 "data_size": 63488 00:19:05.773 }, 00:19:05.773 { 00:19:05.773 "name": "pt4", 00:19:05.773 "uuid": "82f0e4c5-f2ab-5118-84e3-2ec60b064f72", 00:19:05.773 "is_configured": true, 00:19:05.773 "data_offset": 2048, 00:19:05.773 "data_size": 63488 00:19:05.773 } 00:19:05.773 ] 00:19:05.773 }' 00:19:05.773 01:01:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:05.773 01:01:40 -- common/autotest_common.sh@10 -- # set +x 00:19:06.341 01:01:40 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:06.341 01:01:40 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:06.599 [2024-11-18 01:01:40.767363] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.599 01:01:40 -- bdev/bdev_raid.sh@430 -- # '[' d88ca85d-523f-4a63-9e1a-0a2b4e3b0c00 '!=' d88ca85d-523f-4a63-9e1a-0a2b4e3b0c00 ']' 00:19:06.599 01:01:40 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:19:06.599 01:01:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:06.599 01:01:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:06.599 01:01:40 -- bdev/bdev_raid.sh@511 -- # killprocess 131221 00:19:06.599 01:01:40 -- common/autotest_common.sh@936 -- # '[' 
-z 131221 ']' 00:19:06.599 01:01:40 -- common/autotest_common.sh@940 -- # kill -0 131221 00:19:06.599 01:01:40 -- common/autotest_common.sh@941 -- # uname 00:19:06.599 01:01:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:06.599 01:01:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131221 00:19:06.599 01:01:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:06.599 01:01:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:06.599 01:01:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131221' 00:19:06.599 killing process with pid 131221 00:19:06.599 01:01:40 -- common/autotest_common.sh@955 -- # kill 131221 00:19:06.599 [2024-11-18 01:01:40.822474] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:06.599 01:01:40 -- common/autotest_common.sh@960 -- # wait 131221 00:19:06.599 [2024-11-18 01:01:40.822584] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.599 [2024-11-18 01:01:40.822669] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.599 [2024-11-18 01:01:40.822682] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:19:06.599 [2024-11-18 01:01:40.907687] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:07.167 00:19:07.167 real 0m10.604s 00:19:07.167 user 0m18.342s 00:19:07.167 sys 0m2.039s 00:19:07.167 ************************************ 00:19:07.167 END TEST raid_superblock_test 00:19:07.167 ************************************ 00:19:07.167 01:01:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:07.167 01:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:19:07.167 01:01:41 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:07.167 01:01:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:07.167 01:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:07.167 ************************************ 00:19:07.167 START TEST raid_state_function_test 00:19:07.167 ************************************ 00:19:07.167 01:01:41 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 false 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:07.167 01:01:41 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=131537 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:07.167 Process raid pid: 131537 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131537' 00:19:07.167 01:01:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 131537 /var/tmp/spdk-raid.sock 00:19:07.167 01:01:41 -- common/autotest_common.sh@829 -- # '[' -z 131537 ']' 00:19:07.167 01:01:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:07.167 01:01:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:07.167 01:01:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:07.167 01:01:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.167 01:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:07.167 [2024-11-18 01:01:41.458159] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:19:07.167 [2024-11-18 01:01:41.458454] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.426 [2024-11-18 01:01:41.613690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.426 [2024-11-18 01:01:41.700211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.426 [2024-11-18 01:01:41.778883] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.993 01:01:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:07.993 01:01:42 -- common/autotest_common.sh@862 -- # return 0 00:19:07.993 01:01:42 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:08.252 [2024-11-18 01:01:42.512717] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:08.252 [2024-11-18 01:01:42.512834] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:08.252 [2024-11-18 01:01:42.512847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:08.252 [2024-11-18 01:01:42.512866] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:08.252 [2024-11-18 01:01:42.512873] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:08.252 [2024-11-18 01:01:42.512926] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:08.252 [2024-11-18 01:01:42.512933] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:08.252 [2024-11-18 01:01:42.512962] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:08.252 01:01:42 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:08.252 01:01:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:08.252 01:01:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:08.252 01:01:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:08.252 01:01:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:08.252 01:01:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:08.252 01:01:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:08.252 01:01:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:08.252 01:01:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:08.252 01:01:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:08.252 01:01:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.252 01:01:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.589 01:01:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:08.589 "name": "Existed_Raid", 00:19:08.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.589 "strip_size_kb": 0, 00:19:08.589 "state": "configuring", 00:19:08.589 "raid_level": "raid1", 00:19:08.589 "superblock": false, 00:19:08.589 "num_base_bdevs": 4, 00:19:08.589 "num_base_bdevs_discovered": 0, 00:19:08.589 "num_base_bdevs_operational": 4, 00:19:08.589 "base_bdevs_list": [ 00:19:08.589 { 00:19:08.589 "name": 
"BaseBdev1", 00:19:08.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.589 "is_configured": false, 00:19:08.589 "data_offset": 0, 00:19:08.589 "data_size": 0 00:19:08.589 }, 00:19:08.589 { 00:19:08.589 "name": "BaseBdev2", 00:19:08.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.590 "is_configured": false, 00:19:08.590 "data_offset": 0, 00:19:08.590 "data_size": 0 00:19:08.590 }, 00:19:08.590 { 00:19:08.590 "name": "BaseBdev3", 00:19:08.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.590 "is_configured": false, 00:19:08.590 "data_offset": 0, 00:19:08.590 "data_size": 0 00:19:08.590 }, 00:19:08.590 { 00:19:08.590 "name": "BaseBdev4", 00:19:08.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.590 "is_configured": false, 00:19:08.590 "data_offset": 0, 00:19:08.590 "data_size": 0 00:19:08.590 } 00:19:08.590 ] 00:19:08.590 }' 00:19:08.590 01:01:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:08.590 01:01:42 -- common/autotest_common.sh@10 -- # set +x 00:19:09.191 01:01:43 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:09.191 [2024-11-18 01:01:43.580761] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:09.191 [2024-11-18 01:01:43.580828] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:19:09.450 01:01:43 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:09.450 [2024-11-18 01:01:43.776826] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:09.450 [2024-11-18 01:01:43.776922] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:09.450 [2024-11-18 01:01:43.776933] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:09.450 [2024-11-18 01:01:43.776960] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:09.450 [2024-11-18 01:01:43.776967] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:09.450 [2024-11-18 01:01:43.776986] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:09.450 [2024-11-18 01:01:43.776992] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:09.450 [2024-11-18 01:01:43.777018] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:09.450 01:01:43 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:09.708 [2024-11-18 01:01:44.049000] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:09.708 BaseBdev1 00:19:09.708 01:01:44 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:09.708 01:01:44 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:09.708 01:01:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:09.708 01:01:44 -- common/autotest_common.sh@899 -- # local i 00:19:09.708 01:01:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:09.708 01:01:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:09.708 01:01:44 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:09.967 01:01:44 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:10.226 [ 00:19:10.226 { 00:19:10.226 "name": "BaseBdev1", 00:19:10.226 "aliases": [ 00:19:10.226 "bd0b6ba5-b1ab-48d6-9271-dcaba0224dfc" 00:19:10.226 ], 00:19:10.226 "product_name": "Malloc disk", 00:19:10.226 "block_size": 512, 00:19:10.226 "num_blocks": 65536, 00:19:10.226 "uuid": "bd0b6ba5-b1ab-48d6-9271-dcaba0224dfc", 00:19:10.226 "assigned_rate_limits": { 00:19:10.226 "rw_ios_per_sec": 0, 00:19:10.226 "rw_mbytes_per_sec": 0, 00:19:10.226 "r_mbytes_per_sec": 0, 00:19:10.226 "w_mbytes_per_sec": 0 00:19:10.226 }, 00:19:10.226 "claimed": true, 00:19:10.226 "claim_type": "exclusive_write", 00:19:10.226 "zoned": false, 00:19:10.226 "supported_io_types": { 00:19:10.226 "read": true, 00:19:10.226 "write": true, 00:19:10.226 "unmap": true, 00:19:10.226 "write_zeroes": true, 00:19:10.226 "flush": true, 00:19:10.226 "reset": true, 00:19:10.226 "compare": false, 00:19:10.226 "compare_and_write": false, 00:19:10.226 "abort": true, 00:19:10.226 "nvme_admin": false, 00:19:10.226 "nvme_io": false 00:19:10.226 }, 00:19:10.226 "memory_domains": [ 00:19:10.226 { 00:19:10.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.226 "dma_device_type": 2 00:19:10.226 } 00:19:10.226 ], 00:19:10.226 "driver_specific": {} 00:19:10.226 } 00:19:10.226 ] 00:19:10.226 01:01:44 -- common/autotest_common.sh@905 -- # return 0 00:19:10.226 01:01:44 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:10.226 01:01:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:10.226 01:01:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:10.226 01:01:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:10.226 01:01:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:10.226 01:01:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:10.226 01:01:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:10.226 01:01:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:10.226 01:01:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:10.226 01:01:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:10.226 01:01:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.226 01:01:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.485 01:01:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:10.485 "name": "Existed_Raid", 00:19:10.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.485 "strip_size_kb": 0, 00:19:10.485 "state": "configuring", 00:19:10.485 "raid_level": "raid1", 00:19:10.485 "superblock": false, 00:19:10.485 "num_base_bdevs": 4, 00:19:10.485 "num_base_bdevs_discovered": 1, 00:19:10.485 "num_base_bdevs_operational": 4, 00:19:10.485 "base_bdevs_list": [ 00:19:10.485 { 00:19:10.485 "name": "BaseBdev1", 00:19:10.486 "uuid": "bd0b6ba5-b1ab-48d6-9271-dcaba0224dfc", 00:19:10.486 "is_configured": true, 00:19:10.486 "data_offset": 0, 00:19:10.486 "data_size": 65536 00:19:10.486 }, 00:19:10.486 { 00:19:10.486 "name": "BaseBdev2", 00:19:10.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.486 "is_configured": false, 00:19:10.486 "data_offset": 0, 00:19:10.486 "data_size": 0 00:19:10.486 }, 
00:19:10.486 { 00:19:10.486 "name": "BaseBdev3", 00:19:10.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.486 "is_configured": false, 00:19:10.486 "data_offset": 0, 00:19:10.486 "data_size": 0 00:19:10.486 }, 00:19:10.486 { 00:19:10.486 "name": "BaseBdev4", 00:19:10.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.486 "is_configured": false, 00:19:10.486 "data_offset": 0, 00:19:10.486 "data_size": 0 00:19:10.486 } 00:19:10.486 ] 00:19:10.486 }' 00:19:10.486 01:01:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:10.486 01:01:44 -- common/autotest_common.sh@10 -- # set +x 00:19:11.052 01:01:45 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:11.311 [2024-11-18 01:01:45.589317] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:11.311 [2024-11-18 01:01:45.589403] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:19:11.311 01:01:45 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:11.311 01:01:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:11.570 [2024-11-18 01:01:45.777460] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.570 [2024-11-18 01:01:45.779978] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:11.570 [2024-11-18 01:01:45.780077] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:11.570 [2024-11-18 01:01:45.780088] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:11.570 [2024-11-18 01:01:45.780114] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:11.570 [2024-11-18 01:01:45.780123] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:11.570 [2024-11-18 01:01:45.780141] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:11.570 01:01:45 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:11.570 01:01:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:11.570 01:01:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:11.570 01:01:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:11.570 01:01:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:11.570 01:01:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:11.570 01:01:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:11.570 01:01:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:11.570 01:01:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:11.570 01:01:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:11.571 01:01:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:11.571 01:01:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:11.571 01:01:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.571 01:01:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.830 01:01:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:11.830 "name": "Existed_Raid", 00:19:11.830 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:11.830 "strip_size_kb": 0, 00:19:11.830 "state": "configuring", 00:19:11.830 "raid_level": "raid1", 00:19:11.830 "superblock": false, 00:19:11.830 "num_base_bdevs": 4, 00:19:11.830 "num_base_bdevs_discovered": 1, 00:19:11.830 "num_base_bdevs_operational": 4, 00:19:11.830 "base_bdevs_list": [ 00:19:11.830 { 00:19:11.830 "name": "BaseBdev1", 00:19:11.830 "uuid": "bd0b6ba5-b1ab-48d6-9271-dcaba0224dfc", 00:19:11.830 "is_configured": true, 00:19:11.830 "data_offset": 0, 00:19:11.830 "data_size": 65536 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "name": "BaseBdev2", 00:19:11.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.830 "is_configured": false, 00:19:11.830 "data_offset": 0, 00:19:11.830 "data_size": 0 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "name": "BaseBdev3", 00:19:11.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.830 "is_configured": false, 00:19:11.830 "data_offset": 0, 00:19:11.830 "data_size": 0 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "name": "BaseBdev4", 00:19:11.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.830 "is_configured": false, 00:19:11.830 "data_offset": 0, 00:19:11.830 "data_size": 0 00:19:11.830 } 00:19:11.830 ] 00:19:11.830 }' 00:19:11.830 01:01:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:11.830 01:01:46 -- common/autotest_common.sh@10 -- # set +x 00:19:12.398 01:01:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:12.657 [2024-11-18 01:01:46.879641] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:12.657 BaseBdev2 00:19:12.657 01:01:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:12.657 01:01:46 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:12.657 01:01:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:12.657 01:01:46 -- common/autotest_common.sh@899 -- # local i 00:19:12.657 01:01:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:12.657 01:01:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:12.657 01:01:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.915 01:01:47 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:13.175 [ 00:19:13.175 { 00:19:13.175 "name": "BaseBdev2", 00:19:13.175 "aliases": [ 00:19:13.175 "efcfd9f6-2b72-4647-8b99-cbd0a8980853" 00:19:13.175 ], 00:19:13.175 "product_name": "Malloc disk", 00:19:13.175 "block_size": 512, 00:19:13.175 "num_blocks": 65536, 00:19:13.175 "uuid": "efcfd9f6-2b72-4647-8b99-cbd0a8980853", 00:19:13.175 "assigned_rate_limits": { 00:19:13.175 "rw_ios_per_sec": 0, 00:19:13.175 "rw_mbytes_per_sec": 0, 00:19:13.175 "r_mbytes_per_sec": 0, 00:19:13.175 "w_mbytes_per_sec": 0 00:19:13.175 }, 00:19:13.175 "claimed": true, 00:19:13.175 "claim_type": "exclusive_write", 00:19:13.175 "zoned": false, 00:19:13.175 "supported_io_types": { 00:19:13.175 "read": true, 00:19:13.175 "write": true, 00:19:13.175 "unmap": true, 00:19:13.175 "write_zeroes": true, 00:19:13.175 "flush": true, 00:19:13.175 "reset": true, 00:19:13.175 "compare": false, 00:19:13.175 "compare_and_write": false, 00:19:13.175 "abort": true, 00:19:13.175 "nvme_admin": false, 00:19:13.175 "nvme_io": false 00:19:13.175 }, 00:19:13.175 "memory_domains": [ 00:19:13.175 { 
00:19:13.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.175 "dma_device_type": 2 00:19:13.175 } 00:19:13.175 ], 00:19:13.175 "driver_specific": {} 00:19:13.175 } 00:19:13.175 ] 00:19:13.175 01:01:47 -- common/autotest_common.sh@905 -- # return 0 00:19:13.175 01:01:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:13.175 01:01:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:13.175 01:01:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:13.175 01:01:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:13.175 01:01:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:13.175 01:01:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:13.175 01:01:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:13.175 01:01:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:13.175 01:01:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:13.175 01:01:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:13.175 01:01:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:13.175 01:01:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:13.175 01:01:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.175 01:01:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.434 01:01:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.434 "name": "Existed_Raid", 00:19:13.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.434 "strip_size_kb": 0, 00:19:13.434 "state": "configuring", 00:19:13.434 "raid_level": "raid1", 00:19:13.434 "superblock": false, 00:19:13.434 "num_base_bdevs": 4, 00:19:13.434 "num_base_bdevs_discovered": 2, 00:19:13.434 "num_base_bdevs_operational": 4, 00:19:13.434 "base_bdevs_list": [ 00:19:13.434 { 00:19:13.434 "name": "BaseBdev1", 00:19:13.434 "uuid": "bd0b6ba5-b1ab-48d6-9271-dcaba0224dfc", 00:19:13.434 "is_configured": true, 00:19:13.434 "data_offset": 0, 00:19:13.434 "data_size": 65536 00:19:13.434 }, 00:19:13.434 { 00:19:13.434 "name": "BaseBdev2", 00:19:13.434 "uuid": "efcfd9f6-2b72-4647-8b99-cbd0a8980853", 00:19:13.434 "is_configured": true, 00:19:13.434 "data_offset": 0, 00:19:13.434 "data_size": 65536 00:19:13.434 }, 00:19:13.434 { 00:19:13.434 "name": "BaseBdev3", 00:19:13.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.434 "is_configured": false, 00:19:13.434 "data_offset": 0, 00:19:13.434 "data_size": 0 00:19:13.434 }, 00:19:13.434 { 00:19:13.434 "name": "BaseBdev4", 00:19:13.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.434 "is_configured": false, 00:19:13.434 "data_offset": 0, 00:19:13.434 "data_size": 0 00:19:13.434 } 00:19:13.434 ] 00:19:13.434 }' 00:19:13.434 01:01:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.434 01:01:47 -- common/autotest_common.sh@10 -- # set +x 00:19:14.001 01:01:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:14.001 [2024-11-18 01:01:48.369573] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:14.001 BaseBdev3 00:19:14.001 01:01:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:14.001 01:01:48 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:14.001 01:01:48 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:14.001 01:01:48 -- 
common/autotest_common.sh@899 -- # local i 00:19:14.001 01:01:48 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:14.001 01:01:48 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:14.001 01:01:48 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.260 01:01:48 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:14.518 [ 00:19:14.518 { 00:19:14.518 "name": "BaseBdev3", 00:19:14.518 "aliases": [ 00:19:14.518 "88bbe701-faa8-4556-b8cb-cbb97da277d9" 00:19:14.518 ], 00:19:14.518 "product_name": "Malloc disk", 00:19:14.518 "block_size": 512, 00:19:14.518 "num_blocks": 65536, 00:19:14.518 "uuid": "88bbe701-faa8-4556-b8cb-cbb97da277d9", 00:19:14.518 "assigned_rate_limits": { 00:19:14.518 "rw_ios_per_sec": 0, 00:19:14.518 "rw_mbytes_per_sec": 0, 00:19:14.518 "r_mbytes_per_sec": 0, 00:19:14.518 "w_mbytes_per_sec": 0 00:19:14.518 }, 00:19:14.518 "claimed": true, 00:19:14.518 "claim_type": "exclusive_write", 00:19:14.518 "zoned": false, 00:19:14.518 "supported_io_types": { 00:19:14.518 "read": true, 00:19:14.518 "write": true, 00:19:14.518 "unmap": true, 00:19:14.518 "write_zeroes": true, 00:19:14.518 "flush": true, 00:19:14.518 "reset": true, 00:19:14.518 "compare": false, 00:19:14.518 "compare_and_write": false, 00:19:14.518 "abort": true, 00:19:14.518 "nvme_admin": false, 00:19:14.518 "nvme_io": false 00:19:14.518 }, 00:19:14.518 "memory_domains": [ 00:19:14.518 { 00:19:14.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.518 "dma_device_type": 2 00:19:14.518 } 00:19:14.518 ], 00:19:14.518 "driver_specific": {} 00:19:14.518 } 00:19:14.518 ] 00:19:14.518 01:01:48 -- common/autotest_common.sh@905 -- # return 0 00:19:14.518 01:01:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:14.518 01:01:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:14.518 01:01:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:14.518 01:01:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:14.518 01:01:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:14.518 01:01:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:14.518 01:01:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:14.518 01:01:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:14.518 01:01:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:14.518 01:01:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:14.518 01:01:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:14.518 01:01:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:14.518 01:01:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.518 01:01:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.777 01:01:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:14.777 "name": "Existed_Raid", 00:19:14.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.777 "strip_size_kb": 0, 00:19:14.777 "state": "configuring", 00:19:14.777 "raid_level": "raid1", 00:19:14.777 "superblock": false, 00:19:14.777 "num_base_bdevs": 4, 00:19:14.777 "num_base_bdevs_discovered": 3, 00:19:14.777 "num_base_bdevs_operational": 4, 00:19:14.777 "base_bdevs_list": [ 00:19:14.777 { 00:19:14.777 "name": "BaseBdev1", 
00:19:14.777 "uuid": "bd0b6ba5-b1ab-48d6-9271-dcaba0224dfc", 00:19:14.777 "is_configured": true, 00:19:14.777 "data_offset": 0, 00:19:14.777 "data_size": 65536 00:19:14.777 }, 00:19:14.777 { 00:19:14.777 "name": "BaseBdev2", 00:19:14.777 "uuid": "efcfd9f6-2b72-4647-8b99-cbd0a8980853", 00:19:14.777 "is_configured": true, 00:19:14.777 "data_offset": 0, 00:19:14.777 "data_size": 65536 00:19:14.777 }, 00:19:14.777 { 00:19:14.777 "name": "BaseBdev3", 00:19:14.777 "uuid": "88bbe701-faa8-4556-b8cb-cbb97da277d9", 00:19:14.777 "is_configured": true, 00:19:14.777 "data_offset": 0, 00:19:14.777 "data_size": 65536 00:19:14.777 }, 00:19:14.777 { 00:19:14.777 "name": "BaseBdev4", 00:19:14.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.777 "is_configured": false, 00:19:14.777 "data_offset": 0, 00:19:14.777 "data_size": 0 00:19:14.777 } 00:19:14.777 ] 00:19:14.777 }' 00:19:14.777 01:01:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:14.777 01:01:48 -- common/autotest_common.sh@10 -- # set +x 00:19:15.344 01:01:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:15.603 [2024-11-18 01:01:49.763674] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:15.603 [2024-11-18 01:01:49.763767] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:19:15.603 [2024-11-18 01:01:49.763777] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:15.603 [2024-11-18 01:01:49.763948] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:19:15.603 [2024-11-18 01:01:49.764438] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:19:15.603 [2024-11-18 01:01:49.764460] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:19:15.603 [2024-11-18 01:01:49.764778] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.603 BaseBdev4 00:19:15.603 01:01:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:15.603 01:01:49 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:19:15.603 01:01:49 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:15.603 01:01:49 -- common/autotest_common.sh@899 -- # local i 00:19:15.603 01:01:49 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:15.603 01:01:49 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:15.603 01:01:49 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:15.603 01:01:49 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:15.861 [ 00:19:15.861 { 00:19:15.861 "name": "BaseBdev4", 00:19:15.861 "aliases": [ 00:19:15.862 "a2eac3f9-e047-44bc-b39e-7497e7d00c18" 00:19:15.862 ], 00:19:15.862 "product_name": "Malloc disk", 00:19:15.862 "block_size": 512, 00:19:15.862 "num_blocks": 65536, 00:19:15.862 "uuid": "a2eac3f9-e047-44bc-b39e-7497e7d00c18", 00:19:15.862 "assigned_rate_limits": { 00:19:15.862 "rw_ios_per_sec": 0, 00:19:15.862 "rw_mbytes_per_sec": 0, 00:19:15.862 "r_mbytes_per_sec": 0, 00:19:15.862 "w_mbytes_per_sec": 0 00:19:15.862 }, 00:19:15.862 "claimed": true, 00:19:15.862 "claim_type": "exclusive_write", 00:19:15.862 "zoned": false, 00:19:15.862 "supported_io_types": { 
00:19:15.862 "read": true, 00:19:15.862 "write": true, 00:19:15.862 "unmap": true, 00:19:15.862 "write_zeroes": true, 00:19:15.862 "flush": true, 00:19:15.862 "reset": true, 00:19:15.862 "compare": false, 00:19:15.862 "compare_and_write": false, 00:19:15.862 "abort": true, 00:19:15.862 "nvme_admin": false, 00:19:15.862 "nvme_io": false 00:19:15.862 }, 00:19:15.862 "memory_domains": [ 00:19:15.862 { 00:19:15.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.862 "dma_device_type": 2 00:19:15.862 } 00:19:15.862 ], 00:19:15.862 "driver_specific": {} 00:19:15.862 } 00:19:15.862 ] 00:19:15.862 01:01:50 -- common/autotest_common.sh@905 -- # return 0 00:19:15.862 01:01:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:15.862 01:01:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:15.862 01:01:50 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:15.862 01:01:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:15.862 01:01:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:15.862 01:01:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:15.862 01:01:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:15.862 01:01:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:15.862 01:01:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:15.862 01:01:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:15.862 01:01:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:15.862 01:01:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:15.862 01:01:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.862 01:01:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.120 01:01:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:16.120 "name": "Existed_Raid", 00:19:16.120 "uuid": "1f449c1c-1f8e-4196-a156-d2b275151638", 00:19:16.120 "strip_size_kb": 0, 00:19:16.120 "state": "online", 00:19:16.120 "raid_level": "raid1", 00:19:16.120 "superblock": false, 00:19:16.120 "num_base_bdevs": 4, 00:19:16.120 "num_base_bdevs_discovered": 4, 00:19:16.120 "num_base_bdevs_operational": 4, 00:19:16.120 "base_bdevs_list": [ 00:19:16.120 { 00:19:16.120 "name": "BaseBdev1", 00:19:16.120 "uuid": "bd0b6ba5-b1ab-48d6-9271-dcaba0224dfc", 00:19:16.120 "is_configured": true, 00:19:16.120 "data_offset": 0, 00:19:16.120 "data_size": 65536 00:19:16.120 }, 00:19:16.120 { 00:19:16.120 "name": "BaseBdev2", 00:19:16.120 "uuid": "efcfd9f6-2b72-4647-8b99-cbd0a8980853", 00:19:16.120 "is_configured": true, 00:19:16.120 "data_offset": 0, 00:19:16.120 "data_size": 65536 00:19:16.120 }, 00:19:16.120 { 00:19:16.120 "name": "BaseBdev3", 00:19:16.120 "uuid": "88bbe701-faa8-4556-b8cb-cbb97da277d9", 00:19:16.120 "is_configured": true, 00:19:16.120 "data_offset": 0, 00:19:16.120 "data_size": 65536 00:19:16.120 }, 00:19:16.120 { 00:19:16.120 "name": "BaseBdev4", 00:19:16.120 "uuid": "a2eac3f9-e047-44bc-b39e-7497e7d00c18", 00:19:16.120 "is_configured": true, 00:19:16.120 "data_offset": 0, 00:19:16.120 "data_size": 65536 00:19:16.120 } 00:19:16.120 ] 00:19:16.120 }' 00:19:16.120 01:01:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:16.120 01:01:50 -- common/autotest_common.sh@10 -- # set +x 00:19:16.686 01:01:51 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:16.944 [2024-11-18 01:01:51.300223] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:16.944 01:01:51 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:16.944 01:01:51 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:16.944 01:01:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:16.944 01:01:51 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:17.203 "name": "Existed_Raid", 00:19:17.203 "uuid": "1f449c1c-1f8e-4196-a156-d2b275151638", 00:19:17.203 "strip_size_kb": 0, 00:19:17.203 "state": "online", 00:19:17.203 "raid_level": "raid1", 00:19:17.203 "superblock": false, 00:19:17.203 "num_base_bdevs": 4, 00:19:17.203 "num_base_bdevs_discovered": 3, 00:19:17.203 "num_base_bdevs_operational": 3, 00:19:17.203 "base_bdevs_list": [ 00:19:17.203 { 00:19:17.203 "name": null, 00:19:17.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.203 "is_configured": false, 00:19:17.203 "data_offset": 0, 00:19:17.203 "data_size": 65536 00:19:17.203 }, 00:19:17.203 { 00:19:17.203 "name": "BaseBdev2", 00:19:17.203 "uuid": "efcfd9f6-2b72-4647-8b99-cbd0a8980853", 00:19:17.203 "is_configured": true, 00:19:17.203 "data_offset": 0, 00:19:17.203 "data_size": 65536 00:19:17.203 }, 00:19:17.203 { 00:19:17.203 "name": "BaseBdev3", 00:19:17.203 "uuid": "88bbe701-faa8-4556-b8cb-cbb97da277d9", 00:19:17.203 "is_configured": true, 00:19:17.203 "data_offset": 0, 00:19:17.203 "data_size": 65536 00:19:17.203 }, 00:19:17.203 { 00:19:17.203 "name": "BaseBdev4", 00:19:17.203 "uuid": "a2eac3f9-e047-44bc-b39e-7497e7d00c18", 00:19:17.203 "is_configured": true, 00:19:17.203 "data_offset": 0, 00:19:17.203 "data_size": 65536 00:19:17.203 } 00:19:17.203 ] 00:19:17.203 }' 00:19:17.203 01:01:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:17.203 01:01:51 -- common/autotest_common.sh@10 -- # set +x 00:19:18.139 01:01:52 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:18.139 01:01:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:18.139 01:01:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.139 01:01:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:18.139 01:01:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:18.139 01:01:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:18.139 01:01:52 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:18.399 [2024-11-18 01:01:52.673921] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:18.399 01:01:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:18.399 01:01:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:18.399 01:01:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.399 01:01:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:18.658 01:01:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:18.658 01:01:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:18.658 01:01:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:18.917 [2024-11-18 01:01:53.167276] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:18.917 01:01:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:18.917 01:01:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:18.917 01:01:53 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.917 01:01:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:19.202 01:01:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:19.202 01:01:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:19.202 01:01:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:19.481 [2024-11-18 01:01:53.644512] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:19.481 [2024-11-18 01:01:53.644563] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:19.481 [2024-11-18 01:01:53.644648] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:19.481 [2024-11-18 01:01:53.666074] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:19.481 [2024-11-18 01:01:53.666112] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:19:19.481 01:01:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:19.481 01:01:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:19.481 01:01:53 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.482 01:01:53 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:19.740 01:01:53 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:19.740 01:01:53 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:19.741 01:01:53 -- bdev/bdev_raid.sh@287 -- # killprocess 131537 00:19:19.741 01:01:53 -- common/autotest_common.sh@936 -- # '[' -z 131537 ']' 00:19:19.741 01:01:53 -- common/autotest_common.sh@940 -- # kill -0 131537 00:19:19.741 01:01:53 -- common/autotest_common.sh@941 -- # uname 00:19:19.741 01:01:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:19.741 01:01:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131537 00:19:19.741 01:01:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:19.741 01:01:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:19.741 killing process with pid 131537 00:19:19.741 01:01:53 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 131537' 00:19:19.741 01:01:53 -- common/autotest_common.sh@955 -- # kill 131537 00:19:19.741 [2024-11-18 01:01:53.997292] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:19.741 [2024-11-18 01:01:53.997420] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:19.741 01:01:53 -- common/autotest_common.sh@960 -- # wait 131537 00:19:20.307 01:01:54 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:20.307 00:19:20.307 real 0m13.019s 00:19:20.307 user 0m23.117s 00:19:20.307 sys 0m2.297s 00:19:20.307 01:01:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:20.307 01:01:54 -- common/autotest_common.sh@10 -- # set +x 00:19:20.307 ************************************ 00:19:20.307 END TEST raid_state_function_test 00:19:20.307 ************************************ 00:19:20.307 01:01:54 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:19:20.307 01:01:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:20.307 01:01:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:20.307 01:01:54 -- common/autotest_common.sh@10 -- # set +x 00:19:20.307 ************************************ 00:19:20.307 START TEST raid_state_function_test_sb 00:19:20.307 ************************************ 00:19:20.307 01:01:54 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 true 00:19:20.307 01:01:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:20.308 01:01:54 -- 
bdev/bdev_raid.sh@226 -- # raid_pid=131956 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131956' 00:19:20.308 Process raid pid: 131956 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:20.308 01:01:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 131956 /var/tmp/spdk-raid.sock 00:19:20.308 01:01:54 -- common/autotest_common.sh@829 -- # '[' -z 131956 ']' 00:19:20.308 01:01:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:20.308 01:01:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:20.308 01:01:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:20.308 01:01:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.308 01:01:54 -- common/autotest_common.sh@10 -- # set +x 00:19:20.308 [2024-11-18 01:01:54.543785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:20.308 [2024-11-18 01:01:54.543993] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.308 [2024-11-18 01:01:54.682767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.566 [2024-11-18 01:01:54.766791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.566 [2024-11-18 01:01:54.846336] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:21.134 01:01:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.134 01:01:55 -- common/autotest_common.sh@862 -- # return 0 00:19:21.134 01:01:55 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:21.393 [2024-11-18 01:01:55.620901] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:21.393 [2024-11-18 01:01:55.621013] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:21.393 [2024-11-18 01:01:55.621025] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:21.393 [2024-11-18 01:01:55.621045] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:21.393 [2024-11-18 01:01:55.621052] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:21.393 [2024-11-18 01:01:55.621103] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:21.393 [2024-11-18 01:01:55.621110] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:21.393 [2024-11-18 01:01:55.621146] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:21.393 01:01:55 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:21.393 01:01:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:21.393 01:01:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:21.393 01:01:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 
00:19:21.393 01:01:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:21.393 01:01:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:21.393 01:01:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:21.393 01:01:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:21.393 01:01:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:21.393 01:01:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:21.393 01:01:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.393 01:01:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.652 01:01:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:21.652 "name": "Existed_Raid", 00:19:21.652 "uuid": "67c091f4-46c7-435f-9f08-fc8eafa0670e", 00:19:21.652 "strip_size_kb": 0, 00:19:21.652 "state": "configuring", 00:19:21.652 "raid_level": "raid1", 00:19:21.652 "superblock": true, 00:19:21.652 "num_base_bdevs": 4, 00:19:21.652 "num_base_bdevs_discovered": 0, 00:19:21.652 "num_base_bdevs_operational": 4, 00:19:21.652 "base_bdevs_list": [ 00:19:21.652 { 00:19:21.652 "name": "BaseBdev1", 00:19:21.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.652 "is_configured": false, 00:19:21.652 "data_offset": 0, 00:19:21.652 "data_size": 0 00:19:21.652 }, 00:19:21.652 { 00:19:21.652 "name": "BaseBdev2", 00:19:21.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.652 "is_configured": false, 00:19:21.652 "data_offset": 0, 00:19:21.652 "data_size": 0 00:19:21.652 }, 00:19:21.652 { 00:19:21.652 "name": "BaseBdev3", 00:19:21.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.652 "is_configured": false, 00:19:21.652 "data_offset": 0, 00:19:21.652 "data_size": 0 00:19:21.652 }, 00:19:21.652 { 00:19:21.652 "name": "BaseBdev4", 00:19:21.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.652 "is_configured": false, 00:19:21.652 "data_offset": 0, 00:19:21.652 "data_size": 0 00:19:21.652 } 00:19:21.652 ] 00:19:21.652 }' 00:19:21.652 01:01:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:21.652 01:01:55 -- common/autotest_common.sh@10 -- # set +x 00:19:22.221 01:01:56 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:22.221 [2024-11-18 01:01:56.600912] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:22.221 [2024-11-18 01:01:56.600978] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:19:22.480 01:01:56 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:22.480 [2024-11-18 01:01:56.845003] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:22.480 [2024-11-18 01:01:56.845094] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:22.480 [2024-11-18 01:01:56.845105] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:22.480 [2024-11-18 01:01:56.845132] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:22.480 [2024-11-18 01:01:56.845139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:22.480 [2024-11-18 01:01:56.845158] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:22.480 [2024-11-18 01:01:56.845164] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:22.480 [2024-11-18 01:01:56.845192] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:22.480 01:01:56 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:22.739 [2024-11-18 01:01:57.053064] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:22.739 BaseBdev1 00:19:22.739 01:01:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:22.739 01:01:57 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:22.739 01:01:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:22.739 01:01:57 -- common/autotest_common.sh@899 -- # local i 00:19:22.739 01:01:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:22.739 01:01:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:22.739 01:01:57 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:22.998 01:01:57 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:23.257 [ 00:19:23.257 { 00:19:23.257 "name": "BaseBdev1", 00:19:23.257 "aliases": [ 00:19:23.257 "cc76c393-92e7-4cb3-9c35-c721f46e29d0" 00:19:23.257 ], 00:19:23.257 "product_name": "Malloc disk", 00:19:23.257 "block_size": 512, 00:19:23.257 "num_blocks": 65536, 00:19:23.257 "uuid": "cc76c393-92e7-4cb3-9c35-c721f46e29d0", 00:19:23.257 "assigned_rate_limits": { 00:19:23.257 "rw_ios_per_sec": 0, 00:19:23.257 "rw_mbytes_per_sec": 0, 00:19:23.257 "r_mbytes_per_sec": 0, 00:19:23.257 "w_mbytes_per_sec": 0 00:19:23.257 }, 00:19:23.257 "claimed": true, 00:19:23.257 "claim_type": "exclusive_write", 00:19:23.257 "zoned": false, 00:19:23.257 "supported_io_types": { 00:19:23.257 "read": true, 00:19:23.257 "write": true, 00:19:23.257 "unmap": true, 00:19:23.257 "write_zeroes": true, 00:19:23.257 "flush": true, 00:19:23.257 "reset": true, 00:19:23.257 "compare": false, 00:19:23.257 "compare_and_write": false, 00:19:23.257 "abort": true, 00:19:23.257 "nvme_admin": false, 00:19:23.257 "nvme_io": false 00:19:23.257 }, 00:19:23.257 "memory_domains": [ 00:19:23.257 { 00:19:23.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.257 "dma_device_type": 2 00:19:23.257 } 00:19:23.257 ], 00:19:23.257 "driver_specific": {} 00:19:23.257 } 00:19:23.257 ] 00:19:23.257 01:01:57 -- common/autotest_common.sh@905 -- # return 0 00:19:23.257 01:01:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:23.257 01:01:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:23.257 01:01:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:23.257 01:01:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:23.257 01:01:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:23.257 01:01:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:23.257 01:01:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:23.257 01:01:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:23.257 01:01:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:23.257 01:01:57 -- bdev/bdev_raid.sh@125 
-- # local tmp 00:19:23.257 01:01:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.258 01:01:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.516 01:01:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:23.516 "name": "Existed_Raid", 00:19:23.516 "uuid": "6d351f36-3d0a-40dd-a174-6558ee8b9e86", 00:19:23.516 "strip_size_kb": 0, 00:19:23.516 "state": "configuring", 00:19:23.516 "raid_level": "raid1", 00:19:23.516 "superblock": true, 00:19:23.516 "num_base_bdevs": 4, 00:19:23.516 "num_base_bdevs_discovered": 1, 00:19:23.516 "num_base_bdevs_operational": 4, 00:19:23.516 "base_bdevs_list": [ 00:19:23.516 { 00:19:23.516 "name": "BaseBdev1", 00:19:23.516 "uuid": "cc76c393-92e7-4cb3-9c35-c721f46e29d0", 00:19:23.516 "is_configured": true, 00:19:23.516 "data_offset": 2048, 00:19:23.516 "data_size": 63488 00:19:23.516 }, 00:19:23.516 { 00:19:23.516 "name": "BaseBdev2", 00:19:23.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.516 "is_configured": false, 00:19:23.516 "data_offset": 0, 00:19:23.516 "data_size": 0 00:19:23.516 }, 00:19:23.516 { 00:19:23.516 "name": "BaseBdev3", 00:19:23.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.516 "is_configured": false, 00:19:23.516 "data_offset": 0, 00:19:23.516 "data_size": 0 00:19:23.516 }, 00:19:23.516 { 00:19:23.516 "name": "BaseBdev4", 00:19:23.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.517 "is_configured": false, 00:19:23.517 "data_offset": 0, 00:19:23.517 "data_size": 0 00:19:23.517 } 00:19:23.517 ] 00:19:23.517 }' 00:19:23.517 01:01:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:23.517 01:01:57 -- common/autotest_common.sh@10 -- # set +x 00:19:24.084 01:01:58 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:24.343 [2024-11-18 01:01:58.645357] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:24.343 [2024-11-18 01:01:58.645442] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:19:24.343 01:01:58 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:24.343 01:01:58 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:24.602 01:01:58 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:24.861 BaseBdev1 00:19:24.861 01:01:59 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:24.861 01:01:59 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:24.861 01:01:59 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:24.861 01:01:59 -- common/autotest_common.sh@899 -- # local i 00:19:24.861 01:01:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:24.861 01:01:59 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:24.861 01:01:59 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:25.119 01:01:59 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:25.377 [ 00:19:25.377 { 00:19:25.377 "name": "BaseBdev1", 00:19:25.377 "aliases": [ 00:19:25.378 "cf4e5e9a-5786-4efa-968d-7fe0ec55bab8" 00:19:25.378 
], 00:19:25.378 "product_name": "Malloc disk", 00:19:25.378 "block_size": 512, 00:19:25.378 "num_blocks": 65536, 00:19:25.378 "uuid": "cf4e5e9a-5786-4efa-968d-7fe0ec55bab8", 00:19:25.378 "assigned_rate_limits": { 00:19:25.378 "rw_ios_per_sec": 0, 00:19:25.378 "rw_mbytes_per_sec": 0, 00:19:25.378 "r_mbytes_per_sec": 0, 00:19:25.378 "w_mbytes_per_sec": 0 00:19:25.378 }, 00:19:25.378 "claimed": false, 00:19:25.378 "zoned": false, 00:19:25.378 "supported_io_types": { 00:19:25.378 "read": true, 00:19:25.378 "write": true, 00:19:25.378 "unmap": true, 00:19:25.378 "write_zeroes": true, 00:19:25.378 "flush": true, 00:19:25.378 "reset": true, 00:19:25.378 "compare": false, 00:19:25.378 "compare_and_write": false, 00:19:25.378 "abort": true, 00:19:25.378 "nvme_admin": false, 00:19:25.378 "nvme_io": false 00:19:25.378 }, 00:19:25.378 "memory_domains": [ 00:19:25.378 { 00:19:25.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.378 "dma_device_type": 2 00:19:25.378 } 00:19:25.378 ], 00:19:25.378 "driver_specific": {} 00:19:25.378 } 00:19:25.378 ] 00:19:25.378 01:01:59 -- common/autotest_common.sh@905 -- # return 0 00:19:25.378 01:01:59 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:25.378 [2024-11-18 01:01:59.770614] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.378 [2024-11-18 01:01:59.773212] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:25.378 [2024-11-18 01:01:59.773304] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:25.378 [2024-11-18 01:01:59.773314] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:25.378 [2024-11-18 01:01:59.773340] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:25.378 [2024-11-18 01:01:59.773348] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:25.378 [2024-11-18 01:01:59.773366] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:25.636 01:01:59 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:25.636 01:01:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:25.636 01:01:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:25.636 01:01:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:25.636 01:01:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:25.636 01:01:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:25.636 01:01:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:25.636 01:01:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:25.636 01:01:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:25.636 01:01:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:25.636 01:01:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:25.636 01:01:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:25.636 01:01:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.636 01:01:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.895 01:02:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:25.895 "name": "Existed_Raid", 
00:19:25.895 "uuid": "59566002-b54f-423d-a001-f2b396726ac5", 00:19:25.895 "strip_size_kb": 0, 00:19:25.895 "state": "configuring", 00:19:25.895 "raid_level": "raid1", 00:19:25.895 "superblock": true, 00:19:25.895 "num_base_bdevs": 4, 00:19:25.895 "num_base_bdevs_discovered": 1, 00:19:25.895 "num_base_bdevs_operational": 4, 00:19:25.895 "base_bdevs_list": [ 00:19:25.895 { 00:19:25.895 "name": "BaseBdev1", 00:19:25.895 "uuid": "cf4e5e9a-5786-4efa-968d-7fe0ec55bab8", 00:19:25.895 "is_configured": true, 00:19:25.895 "data_offset": 2048, 00:19:25.895 "data_size": 63488 00:19:25.895 }, 00:19:25.895 { 00:19:25.895 "name": "BaseBdev2", 00:19:25.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.895 "is_configured": false, 00:19:25.895 "data_offset": 0, 00:19:25.895 "data_size": 0 00:19:25.895 }, 00:19:25.895 { 00:19:25.895 "name": "BaseBdev3", 00:19:25.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.895 "is_configured": false, 00:19:25.895 "data_offset": 0, 00:19:25.895 "data_size": 0 00:19:25.895 }, 00:19:25.895 { 00:19:25.895 "name": "BaseBdev4", 00:19:25.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.895 "is_configured": false, 00:19:25.895 "data_offset": 0, 00:19:25.895 "data_size": 0 00:19:25.895 } 00:19:25.895 ] 00:19:25.895 }' 00:19:25.895 01:02:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:25.895 01:02:00 -- common/autotest_common.sh@10 -- # set +x 00:19:26.461 01:02:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:26.461 [2024-11-18 01:02:00.820578] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:26.461 BaseBdev2 00:19:26.461 01:02:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:26.461 01:02:00 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:26.461 01:02:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:26.461 01:02:00 -- common/autotest_common.sh@899 -- # local i 00:19:26.461 01:02:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:26.461 01:02:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:26.461 01:02:00 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:26.719 01:02:01 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:26.977 [ 00:19:26.977 { 00:19:26.977 "name": "BaseBdev2", 00:19:26.977 "aliases": [ 00:19:26.977 "131fb038-5766-402b-b881-04240309d057" 00:19:26.977 ], 00:19:26.977 "product_name": "Malloc disk", 00:19:26.977 "block_size": 512, 00:19:26.977 "num_blocks": 65536, 00:19:26.977 "uuid": "131fb038-5766-402b-b881-04240309d057", 00:19:26.977 "assigned_rate_limits": { 00:19:26.977 "rw_ios_per_sec": 0, 00:19:26.977 "rw_mbytes_per_sec": 0, 00:19:26.977 "r_mbytes_per_sec": 0, 00:19:26.977 "w_mbytes_per_sec": 0 00:19:26.977 }, 00:19:26.977 "claimed": true, 00:19:26.977 "claim_type": "exclusive_write", 00:19:26.977 "zoned": false, 00:19:26.977 "supported_io_types": { 00:19:26.977 "read": true, 00:19:26.977 "write": true, 00:19:26.977 "unmap": true, 00:19:26.977 "write_zeroes": true, 00:19:26.977 "flush": true, 00:19:26.977 "reset": true, 00:19:26.977 "compare": false, 00:19:26.977 "compare_and_write": false, 00:19:26.977 "abort": true, 00:19:26.977 "nvme_admin": false, 00:19:26.977 "nvme_io": false 00:19:26.977 }, 00:19:26.977 
"memory_domains": [ 00:19:26.977 { 00:19:26.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.977 "dma_device_type": 2 00:19:26.977 } 00:19:26.977 ], 00:19:26.977 "driver_specific": {} 00:19:26.977 } 00:19:26.977 ] 00:19:26.977 01:02:01 -- common/autotest_common.sh@905 -- # return 0 00:19:26.977 01:02:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:26.977 01:02:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:26.977 01:02:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:26.977 01:02:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:26.977 01:02:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:26.978 01:02:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:26.978 01:02:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:26.978 01:02:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:26.978 01:02:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:26.978 01:02:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:26.978 01:02:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:26.978 01:02:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:26.978 01:02:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.978 01:02:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.236 01:02:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:27.236 "name": "Existed_Raid", 00:19:27.236 "uuid": "59566002-b54f-423d-a001-f2b396726ac5", 00:19:27.236 "strip_size_kb": 0, 00:19:27.236 "state": "configuring", 00:19:27.236 "raid_level": "raid1", 00:19:27.236 "superblock": true, 00:19:27.236 "num_base_bdevs": 4, 00:19:27.236 "num_base_bdevs_discovered": 2, 00:19:27.236 "num_base_bdevs_operational": 4, 00:19:27.236 "base_bdevs_list": [ 00:19:27.236 { 00:19:27.236 "name": "BaseBdev1", 00:19:27.236 "uuid": "cf4e5e9a-5786-4efa-968d-7fe0ec55bab8", 00:19:27.236 "is_configured": true, 00:19:27.236 "data_offset": 2048, 00:19:27.236 "data_size": 63488 00:19:27.236 }, 00:19:27.236 { 00:19:27.236 "name": "BaseBdev2", 00:19:27.236 "uuid": "131fb038-5766-402b-b881-04240309d057", 00:19:27.236 "is_configured": true, 00:19:27.236 "data_offset": 2048, 00:19:27.236 "data_size": 63488 00:19:27.236 }, 00:19:27.236 { 00:19:27.236 "name": "BaseBdev3", 00:19:27.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.236 "is_configured": false, 00:19:27.236 "data_offset": 0, 00:19:27.236 "data_size": 0 00:19:27.236 }, 00:19:27.236 { 00:19:27.236 "name": "BaseBdev4", 00:19:27.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.236 "is_configured": false, 00:19:27.236 "data_offset": 0, 00:19:27.236 "data_size": 0 00:19:27.236 } 00:19:27.236 ] 00:19:27.236 }' 00:19:27.236 01:02:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:27.236 01:02:01 -- common/autotest_common.sh@10 -- # set +x 00:19:27.803 01:02:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:28.062 [2024-11-18 01:02:02.206532] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:28.062 BaseBdev3 00:19:28.062 01:02:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:28.062 01:02:02 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:28.062 01:02:02 -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:19:28.062 01:02:02 -- common/autotest_common.sh@899 -- # local i 00:19:28.062 01:02:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:28.062 01:02:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:28.062 01:02:02 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:28.321 01:02:02 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:28.321 [ 00:19:28.321 { 00:19:28.321 "name": "BaseBdev3", 00:19:28.321 "aliases": [ 00:19:28.321 "3e01c884-4e2c-47a3-aaf7-3d2389fef7a2" 00:19:28.321 ], 00:19:28.321 "product_name": "Malloc disk", 00:19:28.321 "block_size": 512, 00:19:28.321 "num_blocks": 65536, 00:19:28.321 "uuid": "3e01c884-4e2c-47a3-aaf7-3d2389fef7a2", 00:19:28.321 "assigned_rate_limits": { 00:19:28.321 "rw_ios_per_sec": 0, 00:19:28.321 "rw_mbytes_per_sec": 0, 00:19:28.321 "r_mbytes_per_sec": 0, 00:19:28.321 "w_mbytes_per_sec": 0 00:19:28.321 }, 00:19:28.321 "claimed": true, 00:19:28.321 "claim_type": "exclusive_write", 00:19:28.321 "zoned": false, 00:19:28.321 "supported_io_types": { 00:19:28.321 "read": true, 00:19:28.321 "write": true, 00:19:28.321 "unmap": true, 00:19:28.321 "write_zeroes": true, 00:19:28.321 "flush": true, 00:19:28.321 "reset": true, 00:19:28.321 "compare": false, 00:19:28.321 "compare_and_write": false, 00:19:28.321 "abort": true, 00:19:28.321 "nvme_admin": false, 00:19:28.321 "nvme_io": false 00:19:28.321 }, 00:19:28.321 "memory_domains": [ 00:19:28.321 { 00:19:28.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.321 "dma_device_type": 2 00:19:28.321 } 00:19:28.321 ], 00:19:28.321 "driver_specific": {} 00:19:28.321 } 00:19:28.321 ] 00:19:28.321 01:02:02 -- common/autotest_common.sh@905 -- # return 0 00:19:28.321 01:02:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:28.321 01:02:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:28.321 01:02:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:28.321 01:02:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:28.321 01:02:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:28.321 01:02:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:28.321 01:02:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:28.321 01:02:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:28.321 01:02:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.321 01:02:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.321 01:02:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.321 01:02:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.321 01:02:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.321 01:02:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.580 01:02:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:28.580 "name": "Existed_Raid", 00:19:28.580 "uuid": "59566002-b54f-423d-a001-f2b396726ac5", 00:19:28.580 "strip_size_kb": 0, 00:19:28.580 "state": "configuring", 00:19:28.580 "raid_level": "raid1", 00:19:28.580 "superblock": true, 00:19:28.580 "num_base_bdevs": 4, 00:19:28.580 "num_base_bdevs_discovered": 3, 00:19:28.580 "num_base_bdevs_operational": 4, 00:19:28.580 "base_bdevs_list": [ 00:19:28.580 { 
00:19:28.580 "name": "BaseBdev1", 00:19:28.580 "uuid": "cf4e5e9a-5786-4efa-968d-7fe0ec55bab8", 00:19:28.580 "is_configured": true, 00:19:28.580 "data_offset": 2048, 00:19:28.580 "data_size": 63488 00:19:28.580 }, 00:19:28.580 { 00:19:28.580 "name": "BaseBdev2", 00:19:28.580 "uuid": "131fb038-5766-402b-b881-04240309d057", 00:19:28.580 "is_configured": true, 00:19:28.580 "data_offset": 2048, 00:19:28.580 "data_size": 63488 00:19:28.580 }, 00:19:28.580 { 00:19:28.580 "name": "BaseBdev3", 00:19:28.580 "uuid": "3e01c884-4e2c-47a3-aaf7-3d2389fef7a2", 00:19:28.580 "is_configured": true, 00:19:28.580 "data_offset": 2048, 00:19:28.580 "data_size": 63488 00:19:28.580 }, 00:19:28.580 { 00:19:28.580 "name": "BaseBdev4", 00:19:28.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.580 "is_configured": false, 00:19:28.580 "data_offset": 0, 00:19:28.580 "data_size": 0 00:19:28.580 } 00:19:28.580 ] 00:19:28.580 }' 00:19:28.580 01:02:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:28.580 01:02:02 -- common/autotest_common.sh@10 -- # set +x 00:19:29.146 01:02:03 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:29.404 [2024-11-18 01:02:03.672364] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:29.404 [2024-11-18 01:02:03.672642] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:19:29.404 [2024-11-18 01:02:03.672655] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:29.404 [2024-11-18 01:02:03.672784] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:19:29.404 [2024-11-18 01:02:03.673208] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:19:29.404 [2024-11-18 01:02:03.673227] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:19:29.404 [2024-11-18 01:02:03.673400] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.404 BaseBdev4 00:19:29.404 01:02:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:29.404 01:02:03 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:19:29.404 01:02:03 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:29.404 01:02:03 -- common/autotest_common.sh@899 -- # local i 00:19:29.404 01:02:03 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:29.404 01:02:03 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:29.404 01:02:03 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:29.662 01:02:03 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:29.662 [ 00:19:29.662 { 00:19:29.662 "name": "BaseBdev4", 00:19:29.662 "aliases": [ 00:19:29.662 "30d0ec34-93c8-4f6e-9c2b-4fd3b43436f5" 00:19:29.662 ], 00:19:29.662 "product_name": "Malloc disk", 00:19:29.662 "block_size": 512, 00:19:29.662 "num_blocks": 65536, 00:19:29.662 "uuid": "30d0ec34-93c8-4f6e-9c2b-4fd3b43436f5", 00:19:29.662 "assigned_rate_limits": { 00:19:29.662 "rw_ios_per_sec": 0, 00:19:29.662 "rw_mbytes_per_sec": 0, 00:19:29.662 "r_mbytes_per_sec": 0, 00:19:29.662 "w_mbytes_per_sec": 0 00:19:29.662 }, 00:19:29.662 "claimed": true, 00:19:29.662 "claim_type": "exclusive_write", 00:19:29.662 "zoned": false, 
00:19:29.662 "supported_io_types": { 00:19:29.662 "read": true, 00:19:29.662 "write": true, 00:19:29.662 "unmap": true, 00:19:29.662 "write_zeroes": true, 00:19:29.662 "flush": true, 00:19:29.662 "reset": true, 00:19:29.662 "compare": false, 00:19:29.662 "compare_and_write": false, 00:19:29.662 "abort": true, 00:19:29.662 "nvme_admin": false, 00:19:29.662 "nvme_io": false 00:19:29.662 }, 00:19:29.662 "memory_domains": [ 00:19:29.662 { 00:19:29.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.662 "dma_device_type": 2 00:19:29.662 } 00:19:29.662 ], 00:19:29.662 "driver_specific": {} 00:19:29.662 } 00:19:29.662 ] 00:19:29.920 01:02:04 -- common/autotest_common.sh@905 -- # return 0 00:19:29.920 01:02:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:29.920 01:02:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:29.920 01:02:04 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:29.920 01:02:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:29.920 01:02:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:29.920 01:02:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:29.920 01:02:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:29.920 01:02:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:29.921 01:02:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:29.921 01:02:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:29.921 01:02:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:29.921 01:02:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:29.921 01:02:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.921 01:02:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.921 01:02:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:29.921 "name": "Existed_Raid", 00:19:29.921 "uuid": "59566002-b54f-423d-a001-f2b396726ac5", 00:19:29.921 "strip_size_kb": 0, 00:19:29.921 "state": "online", 00:19:29.921 "raid_level": "raid1", 00:19:29.921 "superblock": true, 00:19:29.921 "num_base_bdevs": 4, 00:19:29.921 "num_base_bdevs_discovered": 4, 00:19:29.921 "num_base_bdevs_operational": 4, 00:19:29.921 "base_bdevs_list": [ 00:19:29.921 { 00:19:29.921 "name": "BaseBdev1", 00:19:29.921 "uuid": "cf4e5e9a-5786-4efa-968d-7fe0ec55bab8", 00:19:29.921 "is_configured": true, 00:19:29.921 "data_offset": 2048, 00:19:29.921 "data_size": 63488 00:19:29.921 }, 00:19:29.921 { 00:19:29.921 "name": "BaseBdev2", 00:19:29.921 "uuid": "131fb038-5766-402b-b881-04240309d057", 00:19:29.921 "is_configured": true, 00:19:29.921 "data_offset": 2048, 00:19:29.921 "data_size": 63488 00:19:29.921 }, 00:19:29.921 { 00:19:29.921 "name": "BaseBdev3", 00:19:29.921 "uuid": "3e01c884-4e2c-47a3-aaf7-3d2389fef7a2", 00:19:29.921 "is_configured": true, 00:19:29.921 "data_offset": 2048, 00:19:29.921 "data_size": 63488 00:19:29.921 }, 00:19:29.921 { 00:19:29.921 "name": "BaseBdev4", 00:19:29.921 "uuid": "30d0ec34-93c8-4f6e-9c2b-4fd3b43436f5", 00:19:29.921 "is_configured": true, 00:19:29.921 "data_offset": 2048, 00:19:29.921 "data_size": 63488 00:19:29.921 } 00:19:29.921 ] 00:19:29.921 }' 00:19:29.921 01:02:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:29.921 01:02:04 -- common/autotest_common.sh@10 -- # set +x 00:19:30.487 01:02:04 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete 
BaseBdev1 00:19:30.745 [2024-11-18 01:02:04.976808] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.745 01:02:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.003 01:02:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:31.003 "name": "Existed_Raid", 00:19:31.003 "uuid": "59566002-b54f-423d-a001-f2b396726ac5", 00:19:31.003 "strip_size_kb": 0, 00:19:31.003 "state": "online", 00:19:31.003 "raid_level": "raid1", 00:19:31.003 "superblock": true, 00:19:31.003 "num_base_bdevs": 4, 00:19:31.003 "num_base_bdevs_discovered": 3, 00:19:31.003 "num_base_bdevs_operational": 3, 00:19:31.003 "base_bdevs_list": [ 00:19:31.003 { 00:19:31.003 "name": null, 00:19:31.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.003 "is_configured": false, 00:19:31.003 "data_offset": 2048, 00:19:31.003 "data_size": 63488 00:19:31.003 }, 00:19:31.003 { 00:19:31.003 "name": "BaseBdev2", 00:19:31.003 "uuid": "131fb038-5766-402b-b881-04240309d057", 00:19:31.003 "is_configured": true, 00:19:31.003 "data_offset": 2048, 00:19:31.003 "data_size": 63488 00:19:31.003 }, 00:19:31.003 { 00:19:31.003 "name": "BaseBdev3", 00:19:31.003 "uuid": "3e01c884-4e2c-47a3-aaf7-3d2389fef7a2", 00:19:31.003 "is_configured": true, 00:19:31.003 "data_offset": 2048, 00:19:31.003 "data_size": 63488 00:19:31.003 }, 00:19:31.003 { 00:19:31.003 "name": "BaseBdev4", 00:19:31.003 "uuid": "30d0ec34-93c8-4f6e-9c2b-4fd3b43436f5", 00:19:31.003 "is_configured": true, 00:19:31.003 "data_offset": 2048, 00:19:31.003 "data_size": 63488 00:19:31.003 } 00:19:31.003 ] 00:19:31.003 }' 00:19:31.003 01:02:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:31.003 01:02:05 -- common/autotest_common.sh@10 -- # set +x 00:19:31.570 01:02:05 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:31.570 01:02:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:31.570 01:02:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.570 01:02:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:31.570 01:02:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:31.570 01:02:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:19:31.570 01:02:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:31.828 [2024-11-18 01:02:06.128678] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:31.828 01:02:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:31.828 01:02:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:31.828 01:02:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.828 01:02:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:32.086 01:02:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:32.086 01:02:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:32.086 01:02:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:32.344 [2024-11-18 01:02:06.601943] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:32.344 01:02:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:32.344 01:02:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:32.344 01:02:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.344 01:02:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:32.604 01:02:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:32.604 01:02:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:32.604 01:02:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:32.863 [2024-11-18 01:02:07.058941] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:32.863 [2024-11-18 01:02:07.058997] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:32.863 [2024-11-18 01:02:07.059083] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.863 [2024-11-18 01:02:07.080069] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:32.863 [2024-11-18 01:02:07.080104] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:19:32.863 01:02:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:32.863 01:02:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:32.863 01:02:07 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.863 01:02:07 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:33.121 01:02:07 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:33.121 01:02:07 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:33.121 01:02:07 -- bdev/bdev_raid.sh@287 -- # killprocess 131956 00:19:33.121 01:02:07 -- common/autotest_common.sh@936 -- # '[' -z 131956 ']' 00:19:33.121 01:02:07 -- common/autotest_common.sh@940 -- # kill -0 131956 00:19:33.121 01:02:07 -- common/autotest_common.sh@941 -- # uname 00:19:33.121 01:02:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:33.121 01:02:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131956 00:19:33.121 01:02:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:33.121 01:02:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:33.121 01:02:07 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 131956' 00:19:33.121 killing process with pid 131956 00:19:33.121 01:02:07 -- common/autotest_common.sh@955 -- # kill 131956 00:19:33.121 01:02:07 -- common/autotest_common.sh@960 -- # wait 131956 00:19:33.121 [2024-11-18 01:02:07.399805] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:33.121 [2024-11-18 01:02:07.399919] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:33.688 ************************************ 00:19:33.688 END TEST raid_state_function_test_sb 00:19:33.688 ************************************ 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:33.688 00:19:33.688 real 0m13.324s 00:19:33.688 user 0m23.759s 00:19:33.688 sys 0m2.273s 00:19:33.688 01:02:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:33.688 01:02:07 -- common/autotest_common.sh@10 -- # set +x 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:19:33.688 01:02:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:33.688 01:02:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:33.688 01:02:07 -- common/autotest_common.sh@10 -- # set +x 00:19:33.688 ************************************ 00:19:33.688 START TEST raid_superblock_test 00:19:33.688 ************************************ 00:19:33.688 01:02:07 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 4 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@357 -- # raid_pid=132394 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:33.688 01:02:07 -- bdev/bdev_raid.sh@358 -- # waitforlisten 132394 /var/tmp/spdk-raid.sock 00:19:33.688 01:02:07 -- common/autotest_common.sh@829 -- # '[' -z 132394 ']' 00:19:33.688 01:02:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:33.688 01:02:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:33.688 01:02:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
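Annotation: the raid_superblock_test run that begins above starts a dedicated bdev_svc app on its own RPC socket and blocks until that socket is serving requests (the waitforlisten step). A minimal stand-alone sketch of that startup, using only the paths and socket shown in the log; the polling loop and the rpc_get_methods liveness probe are illustrative assumptions, not the harness's own waitforlisten helper:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # Poll until the UNIX-domain RPC socket exists and answers a trivial request.
    until [ -S /var/tmp/spdk-raid.sock ] && \
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done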
00:19:33.688 01:02:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.688 01:02:07 -- common/autotest_common.sh@10 -- # set +x 00:19:33.689 [2024-11-18 01:02:07.943573] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:33.689 [2024-11-18 01:02:07.943785] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132394 ] 00:19:33.689 [2024-11-18 01:02:08.087410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.947 [2024-11-18 01:02:08.173210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.947 [2024-11-18 01:02:08.251718] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.513 01:02:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.513 01:02:08 -- common/autotest_common.sh@862 -- # return 0 00:19:34.513 01:02:08 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:34.514 01:02:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:34.514 01:02:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:34.514 01:02:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:34.514 01:02:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:34.514 01:02:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:34.514 01:02:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:34.514 01:02:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:34.514 01:02:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:34.772 malloc1 00:19:34.772 01:02:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:35.029 [2024-11-18 01:02:09.321118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:35.029 [2024-11-18 01:02:09.321272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.029 [2024-11-18 01:02:09.321322] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:19:35.029 [2024-11-18 01:02:09.321398] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.029 [2024-11-18 01:02:09.324370] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.029 [2024-11-18 01:02:09.324436] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:35.029 pt1 00:19:35.029 01:02:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:35.029 01:02:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:35.029 01:02:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:35.029 01:02:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:35.029 01:02:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:35.029 01:02:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:35.029 01:02:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:35.030 01:02:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:35.030 01:02:09 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:35.287 malloc2 00:19:35.287 01:02:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:35.545 [2024-11-18 01:02:09.773030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:35.546 [2024-11-18 01:02:09.773150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.546 [2024-11-18 01:02:09.773195] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:19:35.546 [2024-11-18 01:02:09.773246] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.546 [2024-11-18 01:02:09.775995] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.546 [2024-11-18 01:02:09.776052] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:35.546 pt2 00:19:35.546 01:02:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:35.546 01:02:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:35.546 01:02:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:35.546 01:02:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:35.546 01:02:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:35.546 01:02:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:35.546 01:02:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:35.546 01:02:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:35.546 01:02:09 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:35.804 malloc3 00:19:35.804 01:02:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:35.804 [2024-11-18 01:02:10.198638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:35.804 [2024-11-18 01:02:10.198749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.804 [2024-11-18 01:02:10.198798] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:35.804 [2024-11-18 01:02:10.198843] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.804 [2024-11-18 01:02:10.201646] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.804 [2024-11-18 01:02:10.201704] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:35.804 pt3 00:19:36.063 01:02:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:36.063 01:02:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:36.063 01:02:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:36.063 01:02:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:36.063 01:02:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:36.063 01:02:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:36.063 01:02:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:36.063 01:02:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:36.063 01:02:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:36.063 malloc4 00:19:36.063 01:02:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:36.322 [2024-11-18 01:02:10.666444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:36.322 [2024-11-18 01:02:10.666571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.322 [2024-11-18 01:02:10.666613] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:36.322 [2024-11-18 01:02:10.666670] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.322 [2024-11-18 01:02:10.669471] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.322 [2024-11-18 01:02:10.669532] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:36.322 pt4 00:19:36.322 01:02:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:36.322 01:02:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:36.322 01:02:10 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:36.581 [2024-11-18 01:02:10.858597] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:36.581 [2024-11-18 01:02:10.861072] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.581 [2024-11-18 01:02:10.861149] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:36.581 [2024-11-18 01:02:10.861190] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:36.581 [2024-11-18 01:02:10.861419] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:19:36.581 [2024-11-18 01:02:10.861429] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:36.581 [2024-11-18 01:02:10.861585] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:19:36.581 [2024-11-18 01:02:10.862029] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:19:36.581 [2024-11-18 01:02:10.862048] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:19:36.581 [2024-11-18 01:02:10.862231] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.581 01:02:10 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:36.581 01:02:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:36.581 01:02:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:36.581 01:02:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:36.581 01:02:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:36.581 01:02:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:36.581 01:02:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.581 01:02:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.581 01:02:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.581 01:02:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.581 01:02:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
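Annotation: the sequence above builds each base bdev as a malloc bdev (size 32, block size 512) wrapped in a passthru bdev with a fixed UUID, then assembles the superblock-enabled raid1 volume from the four passthru bdevs. A condensed sketch of the same RPC calls (names, sizes, UUIDs and the socket are taken verbatim from the log; the shell loop is only a compact restatement):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b malloc$i                        # size 32, block size 512, as above
        $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s  # -s writes a superblock to each base bdev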
00:19:36.581 01:02:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.840 01:02:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:36.840 "name": "raid_bdev1", 00:19:36.840 "uuid": "71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc", 00:19:36.840 "strip_size_kb": 0, 00:19:36.840 "state": "online", 00:19:36.840 "raid_level": "raid1", 00:19:36.840 "superblock": true, 00:19:36.840 "num_base_bdevs": 4, 00:19:36.840 "num_base_bdevs_discovered": 4, 00:19:36.840 "num_base_bdevs_operational": 4, 00:19:36.840 "base_bdevs_list": [ 00:19:36.840 { 00:19:36.840 "name": "pt1", 00:19:36.840 "uuid": "63dbc414-2baf-5b50-817f-1a751a5e2575", 00:19:36.840 "is_configured": true, 00:19:36.840 "data_offset": 2048, 00:19:36.840 "data_size": 63488 00:19:36.840 }, 00:19:36.840 { 00:19:36.840 "name": "pt2", 00:19:36.840 "uuid": "6bb9d402-cfb0-590f-809d-91ed25446948", 00:19:36.840 "is_configured": true, 00:19:36.840 "data_offset": 2048, 00:19:36.840 "data_size": 63488 00:19:36.840 }, 00:19:36.840 { 00:19:36.840 "name": "pt3", 00:19:36.840 "uuid": "13bb83f0-029d-5391-aef6-d9dfb6bec0a9", 00:19:36.840 "is_configured": true, 00:19:36.840 "data_offset": 2048, 00:19:36.840 "data_size": 63488 00:19:36.840 }, 00:19:36.840 { 00:19:36.840 "name": "pt4", 00:19:36.840 "uuid": "28d1c1d7-0bf1-5bb7-8fb9-ad601e9bf278", 00:19:36.840 "is_configured": true, 00:19:36.840 "data_offset": 2048, 00:19:36.840 "data_size": 63488 00:19:36.840 } 00:19:36.840 ] 00:19:36.840 }' 00:19:36.840 01:02:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:36.840 01:02:11 -- common/autotest_common.sh@10 -- # set +x 00:19:37.408 01:02:11 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:37.408 01:02:11 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:37.666 [2024-11-18 01:02:11.855253] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.666 01:02:11 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc 00:19:37.666 01:02:11 -- bdev/bdev_raid.sh@380 -- # '[' -z 71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc ']' 00:19:37.666 01:02:11 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:37.924 [2024-11-18 01:02:12.099011] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:37.924 [2024-11-18 01:02:12.099050] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:37.924 [2024-11-18 01:02:12.099138] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.924 [2024-11-18 01:02:12.099255] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.924 [2024-11-18 01:02:12.099265] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:19:37.924 01:02:12 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.924 01:02:12 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:37.924 01:02:12 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:37.924 01:02:12 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:37.924 01:02:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:37.924 01:02:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
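Annotation: verify_raid_bdev_state, exercised throughout this run, works by dumping every raid bdev over RPC and filtering the JSON with jq; the blob above is one such dump. A small sketch of that query pattern (the select() filter is the one shown in the log; pulling out a single field such as .state is an illustrative extension):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'    # full info for one raid bdev
    state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state')
    [ "$state" = online ] || echo "unexpected raid state: $state"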
00:19:38.184 01:02:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:38.184 01:02:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:38.442 01:02:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:38.442 01:02:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:38.701 01:02:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:38.701 01:02:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:38.960 01:02:13 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:38.960 01:02:13 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:39.219 01:02:13 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:39.219 01:02:13 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:39.219 01:02:13 -- common/autotest_common.sh@650 -- # local es=0 00:19:39.219 01:02:13 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:39.219 01:02:13 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:39.219 01:02:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.219 01:02:13 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:39.219 01:02:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.219 01:02:13 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:39.219 01:02:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.219 01:02:13 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:39.219 01:02:13 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:39.219 01:02:13 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:39.219 [2024-11-18 01:02:13.547259] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:39.219 [2024-11-18 01:02:13.549726] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:39.219 [2024-11-18 01:02:13.549779] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:39.219 [2024-11-18 01:02:13.549809] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:39.219 [2024-11-18 01:02:13.549859] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:39.219 [2024-11-18 01:02:13.549954] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:39.219 [2024-11-18 01:02:13.549984] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:39.219 [2024-11-18 01:02:13.550033] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:39.219 [2024-11-18 01:02:13.550083] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.219 [2024-11-18 01:02:13.550094] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:19:39.219 request: 00:19:39.219 { 00:19:39.219 "name": "raid_bdev1", 00:19:39.219 "raid_level": "raid1", 00:19:39.219 "base_bdevs": [ 00:19:39.219 "malloc1", 00:19:39.219 "malloc2", 00:19:39.219 "malloc3", 00:19:39.219 "malloc4" 00:19:39.219 ], 00:19:39.219 "superblock": false, 00:19:39.219 "method": "bdev_raid_create", 00:19:39.219 "req_id": 1 00:19:39.219 } 00:19:39.219 Got JSON-RPC error response 00:19:39.219 response: 00:19:39.219 { 00:19:39.219 "code": -17, 00:19:39.219 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:39.219 } 00:19:39.219 01:02:13 -- common/autotest_common.sh@653 -- # es=1 00:19:39.219 01:02:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:39.219 01:02:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:39.219 01:02:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:39.219 01:02:13 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.219 01:02:13 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:39.478 01:02:13 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:39.478 01:02:13 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:39.478 01:02:13 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:39.737 [2024-11-18 01:02:13.931237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:39.737 [2024-11-18 01:02:13.931358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.737 [2024-11-18 01:02:13.931398] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:39.737 [2024-11-18 01:02:13.931429] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.737 [2024-11-18 01:02:13.934191] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.737 [2024-11-18 01:02:13.934265] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:39.737 [2024-11-18 01:02:13.934362] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:39.737 [2024-11-18 01:02:13.934437] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:39.737 pt1 00:19:39.737 01:02:13 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:39.737 01:02:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:39.737 01:02:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:39.737 01:02:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:39.737 01:02:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:39.737 01:02:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:39.737 01:02:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:39.737 01:02:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:39.737 01:02:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:39.737 01:02:13 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:39.737 01:02:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.737 01:02:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.996 01:02:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:39.996 "name": "raid_bdev1", 00:19:39.996 "uuid": "71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc", 00:19:39.996 "strip_size_kb": 0, 00:19:39.996 "state": "configuring", 00:19:39.996 "raid_level": "raid1", 00:19:39.996 "superblock": true, 00:19:39.996 "num_base_bdevs": 4, 00:19:39.996 "num_base_bdevs_discovered": 1, 00:19:39.996 "num_base_bdevs_operational": 4, 00:19:39.996 "base_bdevs_list": [ 00:19:39.996 { 00:19:39.996 "name": "pt1", 00:19:39.996 "uuid": "63dbc414-2baf-5b50-817f-1a751a5e2575", 00:19:39.996 "is_configured": true, 00:19:39.996 "data_offset": 2048, 00:19:39.996 "data_size": 63488 00:19:39.996 }, 00:19:39.996 { 00:19:39.996 "name": null, 00:19:39.996 "uuid": "6bb9d402-cfb0-590f-809d-91ed25446948", 00:19:39.996 "is_configured": false, 00:19:39.996 "data_offset": 2048, 00:19:39.996 "data_size": 63488 00:19:39.996 }, 00:19:39.996 { 00:19:39.996 "name": null, 00:19:39.996 "uuid": "13bb83f0-029d-5391-aef6-d9dfb6bec0a9", 00:19:39.996 "is_configured": false, 00:19:39.996 "data_offset": 2048, 00:19:39.996 "data_size": 63488 00:19:39.996 }, 00:19:39.996 { 00:19:39.996 "name": null, 00:19:39.996 "uuid": "28d1c1d7-0bf1-5bb7-8fb9-ad601e9bf278", 00:19:39.996 "is_configured": false, 00:19:39.996 "data_offset": 2048, 00:19:39.996 "data_size": 63488 00:19:39.996 } 00:19:39.996 ] 00:19:39.996 }' 00:19:39.996 01:02:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:39.996 01:02:14 -- common/autotest_common.sh@10 -- # set +x 00:19:40.564 01:02:14 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:40.564 01:02:14 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:40.822 [2024-11-18 01:02:15.067457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:40.822 [2024-11-18 01:02:15.067562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.822 [2024-11-18 01:02:15.067618] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:40.822 [2024-11-18 01:02:15.067641] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.822 [2024-11-18 01:02:15.068120] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.822 [2024-11-18 01:02:15.068161] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:40.822 [2024-11-18 01:02:15.068257] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:40.822 [2024-11-18 01:02:15.068281] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:40.822 pt2 00:19:40.822 01:02:15 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:41.081 [2024-11-18 01:02:15.347547] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:41.081 01:02:15 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:41.081 01:02:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:41.081 01:02:15 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:19:41.081 01:02:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:41.081 01:02:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:41.081 01:02:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:41.081 01:02:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:41.081 01:02:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:41.081 01:02:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:41.081 01:02:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:41.081 01:02:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.081 01:02:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.340 01:02:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.340 "name": "raid_bdev1", 00:19:41.340 "uuid": "71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc", 00:19:41.340 "strip_size_kb": 0, 00:19:41.340 "state": "configuring", 00:19:41.340 "raid_level": "raid1", 00:19:41.340 "superblock": true, 00:19:41.340 "num_base_bdevs": 4, 00:19:41.340 "num_base_bdevs_discovered": 1, 00:19:41.340 "num_base_bdevs_operational": 4, 00:19:41.340 "base_bdevs_list": [ 00:19:41.340 { 00:19:41.340 "name": "pt1", 00:19:41.340 "uuid": "63dbc414-2baf-5b50-817f-1a751a5e2575", 00:19:41.340 "is_configured": true, 00:19:41.340 "data_offset": 2048, 00:19:41.340 "data_size": 63488 00:19:41.340 }, 00:19:41.340 { 00:19:41.340 "name": null, 00:19:41.340 "uuid": "6bb9d402-cfb0-590f-809d-91ed25446948", 00:19:41.340 "is_configured": false, 00:19:41.340 "data_offset": 2048, 00:19:41.340 "data_size": 63488 00:19:41.340 }, 00:19:41.340 { 00:19:41.340 "name": null, 00:19:41.340 "uuid": "13bb83f0-029d-5391-aef6-d9dfb6bec0a9", 00:19:41.340 "is_configured": false, 00:19:41.340 "data_offset": 2048, 00:19:41.340 "data_size": 63488 00:19:41.340 }, 00:19:41.340 { 00:19:41.340 "name": null, 00:19:41.340 "uuid": "28d1c1d7-0bf1-5bb7-8fb9-ad601e9bf278", 00:19:41.340 "is_configured": false, 00:19:41.340 "data_offset": 2048, 00:19:41.340 "data_size": 63488 00:19:41.340 } 00:19:41.340 ] 00:19:41.340 }' 00:19:41.340 01:02:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.340 01:02:15 -- common/autotest_common.sh@10 -- # set +x 00:19:41.907 01:02:16 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:41.907 01:02:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:41.907 01:02:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:42.165 [2024-11-18 01:02:16.391698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:42.165 [2024-11-18 01:02:16.391815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.165 [2024-11-18 01:02:16.391861] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:42.165 [2024-11-18 01:02:16.391889] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.165 [2024-11-18 01:02:16.392390] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.165 [2024-11-18 01:02:16.392455] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:42.165 [2024-11-18 01:02:16.392544] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:42.165 [2024-11-18 
01:02:16.392566] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:42.165 pt2 00:19:42.166 01:02:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:42.166 01:02:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:42.166 01:02:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:42.424 [2024-11-18 01:02:16.655791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:42.425 [2024-11-18 01:02:16.655910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.425 [2024-11-18 01:02:16.655948] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:42.425 [2024-11-18 01:02:16.655979] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.425 [2024-11-18 01:02:16.656465] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.425 [2024-11-18 01:02:16.656526] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:42.425 [2024-11-18 01:02:16.656613] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:42.425 [2024-11-18 01:02:16.656635] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:42.425 pt3 00:19:42.425 01:02:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:42.425 01:02:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:42.425 01:02:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:42.684 [2024-11-18 01:02:16.847800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:42.684 [2024-11-18 01:02:16.847902] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.684 [2024-11-18 01:02:16.847942] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:42.684 [2024-11-18 01:02:16.847972] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.684 [2024-11-18 01:02:16.848445] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.684 [2024-11-18 01:02:16.848505] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:42.684 [2024-11-18 01:02:16.848588] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:42.684 [2024-11-18 01:02:16.848610] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:42.684 [2024-11-18 01:02:16.848778] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:19:42.684 [2024-11-18 01:02:16.848787] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:42.684 [2024-11-18 01:02:16.848870] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:19:42.684 [2024-11-18 01:02:16.849204] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:19:42.684 [2024-11-18 01:02:16.849223] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:19:42.684 [2024-11-18 01:02:16.849321] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.684 pt4 
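Annotation: because raid_bdev1 was created with on-disk superblocks (-s), recreating a removed base bdev is enough for it to be re-claimed: the examine path finds the metadata ("raid superblock found on bdev ptN" above) and re-attaches the bdev, and once all members are back the volume returns to the online state without any new bdev_raid_create call. A sketch of that recovery step, using the exact RPCs from the log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Re-exposing the base bdevs is sufficient; raid_bdev1 is re-assembled automatically from the superblocks.
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $rpc bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
    $rpc bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004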
00:19:42.684 01:02:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:42.684 01:02:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:42.684 01:02:16 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:42.684 01:02:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:42.684 01:02:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:42.684 01:02:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:42.684 01:02:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:42.684 01:02:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:42.684 01:02:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.684 01:02:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:42.684 01:02:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.684 01:02:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:42.684 01:02:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.684 01:02:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.684 01:02:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:42.684 "name": "raid_bdev1", 00:19:42.684 "uuid": "71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc", 00:19:42.684 "strip_size_kb": 0, 00:19:42.684 "state": "online", 00:19:42.684 "raid_level": "raid1", 00:19:42.684 "superblock": true, 00:19:42.684 "num_base_bdevs": 4, 00:19:42.684 "num_base_bdevs_discovered": 4, 00:19:42.684 "num_base_bdevs_operational": 4, 00:19:42.684 "base_bdevs_list": [ 00:19:42.684 { 00:19:42.684 "name": "pt1", 00:19:42.684 "uuid": "63dbc414-2baf-5b50-817f-1a751a5e2575", 00:19:42.684 "is_configured": true, 00:19:42.684 "data_offset": 2048, 00:19:42.684 "data_size": 63488 00:19:42.684 }, 00:19:42.684 { 00:19:42.684 "name": "pt2", 00:19:42.684 "uuid": "6bb9d402-cfb0-590f-809d-91ed25446948", 00:19:42.684 "is_configured": true, 00:19:42.684 "data_offset": 2048, 00:19:42.684 "data_size": 63488 00:19:42.684 }, 00:19:42.684 { 00:19:42.684 "name": "pt3", 00:19:42.684 "uuid": "13bb83f0-029d-5391-aef6-d9dfb6bec0a9", 00:19:42.684 "is_configured": true, 00:19:42.684 "data_offset": 2048, 00:19:42.684 "data_size": 63488 00:19:42.684 }, 00:19:42.684 { 00:19:42.684 "name": "pt4", 00:19:42.684 "uuid": "28d1c1d7-0bf1-5bb7-8fb9-ad601e9bf278", 00:19:42.684 "is_configured": true, 00:19:42.684 "data_offset": 2048, 00:19:42.684 "data_size": 63488 00:19:42.684 } 00:19:42.684 ] 00:19:42.684 }' 00:19:42.684 01:02:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.684 01:02:17 -- common/autotest_common.sh@10 -- # set +x 00:19:43.252 01:02:17 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:43.252 01:02:17 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:43.511 [2024-11-18 01:02:17.892209] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.771 01:02:17 -- bdev/bdev_raid.sh@430 -- # '[' 71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc '!=' 71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc ']' 00:19:43.771 01:02:17 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:19:43.771 01:02:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:43.771 01:02:17 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:43.771 01:02:17 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:43.771 [2024-11-18 01:02:18.164100] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:44.030 01:02:18 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:44.030 01:02:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:44.030 01:02:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:44.030 01:02:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:44.030 01:02:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:44.030 01:02:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:44.030 01:02:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:44.030 01:02:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:44.030 01:02:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:44.030 01:02:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:44.030 01:02:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.030 01:02:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.030 01:02:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:44.030 "name": "raid_bdev1", 00:19:44.030 "uuid": "71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc", 00:19:44.030 "strip_size_kb": 0, 00:19:44.030 "state": "online", 00:19:44.030 "raid_level": "raid1", 00:19:44.030 "superblock": true, 00:19:44.030 "num_base_bdevs": 4, 00:19:44.030 "num_base_bdevs_discovered": 3, 00:19:44.030 "num_base_bdevs_operational": 3, 00:19:44.030 "base_bdevs_list": [ 00:19:44.030 { 00:19:44.030 "name": null, 00:19:44.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.030 "is_configured": false, 00:19:44.030 "data_offset": 2048, 00:19:44.030 "data_size": 63488 00:19:44.030 }, 00:19:44.030 { 00:19:44.030 "name": "pt2", 00:19:44.030 "uuid": "6bb9d402-cfb0-590f-809d-91ed25446948", 00:19:44.030 "is_configured": true, 00:19:44.030 "data_offset": 2048, 00:19:44.030 "data_size": 63488 00:19:44.030 }, 00:19:44.030 { 00:19:44.030 "name": "pt3", 00:19:44.030 "uuid": "13bb83f0-029d-5391-aef6-d9dfb6bec0a9", 00:19:44.030 "is_configured": true, 00:19:44.030 "data_offset": 2048, 00:19:44.030 "data_size": 63488 00:19:44.030 }, 00:19:44.030 { 00:19:44.030 "name": "pt4", 00:19:44.030 "uuid": "28d1c1d7-0bf1-5bb7-8fb9-ad601e9bf278", 00:19:44.030 "is_configured": true, 00:19:44.030 "data_offset": 2048, 00:19:44.030 "data_size": 63488 00:19:44.030 } 00:19:44.030 ] 00:19:44.030 }' 00:19:44.030 01:02:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:44.030 01:02:18 -- common/autotest_common.sh@10 -- # set +x 00:19:44.966 01:02:19 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:44.966 [2024-11-18 01:02:19.288240] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.966 [2024-11-18 01:02:19.288287] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:44.966 [2024-11-18 01:02:19.288373] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.966 [2024-11-18 01:02:19.288463] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.966 [2024-11-18 01:02:19.288472] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:19:44.966 01:02:19 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:44.966 01:02:19 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:19:45.224 01:02:19 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:19:45.224 01:02:19 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:19:45.224 01:02:19 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:19:45.224 01:02:19 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:45.224 01:02:19 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:45.483 01:02:19 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:45.483 01:02:19 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:45.483 01:02:19 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:45.741 01:02:20 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:45.741 01:02:20 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:45.741 01:02:20 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:46.000 01:02:20 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:46.000 01:02:20 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:46.000 01:02:20 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:19:46.000 01:02:20 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:46.000 01:02:20 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:46.258 [2024-11-18 01:02:20.420395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:46.258 [2024-11-18 01:02:20.420525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.258 [2024-11-18 01:02:20.420563] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:46.258 [2024-11-18 01:02:20.420594] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.258 [2024-11-18 01:02:20.423419] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.258 [2024-11-18 01:02:20.423486] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:46.258 [2024-11-18 01:02:20.423603] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:46.258 [2024-11-18 01:02:20.423639] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:46.258 pt2 00:19:46.258 01:02:20 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:46.258 01:02:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:46.258 01:02:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:46.258 01:02:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:46.258 01:02:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:46.258 01:02:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:46.258 01:02:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:46.258 01:02:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:46.258 01:02:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:46.258 01:02:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:46.258 01:02:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.258 01:02:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.258 01:02:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:46.258 "name": "raid_bdev1", 00:19:46.258 "uuid": "71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc", 00:19:46.258 "strip_size_kb": 0, 00:19:46.258 "state": "configuring", 00:19:46.258 "raid_level": "raid1", 00:19:46.258 "superblock": true, 00:19:46.258 "num_base_bdevs": 4, 00:19:46.258 "num_base_bdevs_discovered": 1, 00:19:46.258 "num_base_bdevs_operational": 3, 00:19:46.258 "base_bdevs_list": [ 00:19:46.258 { 00:19:46.258 "name": null, 00:19:46.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.258 "is_configured": false, 00:19:46.258 "data_offset": 2048, 00:19:46.258 "data_size": 63488 00:19:46.258 }, 00:19:46.258 { 00:19:46.258 "name": "pt2", 00:19:46.258 "uuid": "6bb9d402-cfb0-590f-809d-91ed25446948", 00:19:46.258 "is_configured": true, 00:19:46.258 "data_offset": 2048, 00:19:46.258 "data_size": 63488 00:19:46.258 }, 00:19:46.258 { 00:19:46.258 "name": null, 00:19:46.258 "uuid": "13bb83f0-029d-5391-aef6-d9dfb6bec0a9", 00:19:46.258 "is_configured": false, 00:19:46.258 "data_offset": 2048, 00:19:46.258 "data_size": 63488 00:19:46.258 }, 00:19:46.258 { 00:19:46.258 "name": null, 00:19:46.258 "uuid": "28d1c1d7-0bf1-5bb7-8fb9-ad601e9bf278", 00:19:46.258 "is_configured": false, 00:19:46.258 "data_offset": 2048, 00:19:46.258 "data_size": 63488 00:19:46.258 } 00:19:46.258 ] 00:19:46.258 }' 00:19:46.258 01:02:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:46.258 01:02:20 -- common/autotest_common.sh@10 -- # set +x 00:19:46.883 01:02:21 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:46.883 01:02:21 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:46.883 01:02:21 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:47.143 [2024-11-18 01:02:21.464638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:47.143 [2024-11-18 01:02:21.464763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.143 [2024-11-18 01:02:21.464813] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:47.143 [2024-11-18 01:02:21.464836] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.143 [2024-11-18 01:02:21.466007] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.143 [2024-11-18 01:02:21.466063] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:47.143 [2024-11-18 01:02:21.466180] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:47.143 [2024-11-18 01:02:21.466207] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:47.143 pt3 00:19:47.143 01:02:21 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:47.143 01:02:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:47.143 01:02:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:47.143 01:02:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:47.143 01:02:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:47.143 01:02:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:47.143 01:02:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.143 01:02:21 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:19:47.143 01:02:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.143 01:02:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.143 01:02:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.143 01:02:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.402 01:02:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:47.402 "name": "raid_bdev1", 00:19:47.402 "uuid": "71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc", 00:19:47.402 "strip_size_kb": 0, 00:19:47.402 "state": "configuring", 00:19:47.402 "raid_level": "raid1", 00:19:47.402 "superblock": true, 00:19:47.402 "num_base_bdevs": 4, 00:19:47.402 "num_base_bdevs_discovered": 2, 00:19:47.402 "num_base_bdevs_operational": 3, 00:19:47.402 "base_bdevs_list": [ 00:19:47.402 { 00:19:47.402 "name": null, 00:19:47.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.402 "is_configured": false, 00:19:47.402 "data_offset": 2048, 00:19:47.402 "data_size": 63488 00:19:47.402 }, 00:19:47.402 { 00:19:47.402 "name": "pt2", 00:19:47.402 "uuid": "6bb9d402-cfb0-590f-809d-91ed25446948", 00:19:47.402 "is_configured": true, 00:19:47.402 "data_offset": 2048, 00:19:47.402 "data_size": 63488 00:19:47.402 }, 00:19:47.402 { 00:19:47.402 "name": "pt3", 00:19:47.402 "uuid": "13bb83f0-029d-5391-aef6-d9dfb6bec0a9", 00:19:47.402 "is_configured": true, 00:19:47.402 "data_offset": 2048, 00:19:47.402 "data_size": 63488 00:19:47.402 }, 00:19:47.402 { 00:19:47.402 "name": null, 00:19:47.402 "uuid": "28d1c1d7-0bf1-5bb7-8fb9-ad601e9bf278", 00:19:47.402 "is_configured": false, 00:19:47.402 "data_offset": 2048, 00:19:47.402 "data_size": 63488 00:19:47.402 } 00:19:47.402 ] 00:19:47.402 }' 00:19:47.402 01:02:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:47.402 01:02:21 -- common/autotest_common.sh@10 -- # set +x 00:19:47.970 01:02:22 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:47.970 01:02:22 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:47.970 01:02:22 -- bdev/bdev_raid.sh@462 -- # i=3 00:19:47.970 01:02:22 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:48.230 [2024-11-18 01:02:22.512801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:48.230 [2024-11-18 01:02:22.512927] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.230 [2024-11-18 01:02:22.512975] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:48.230 [2024-11-18 01:02:22.512998] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.230 [2024-11-18 01:02:22.513497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.230 [2024-11-18 01:02:22.513540] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:48.230 [2024-11-18 01:02:22.513638] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:48.230 [2024-11-18 01:02:22.513661] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:48.230 [2024-11-18 01:02:22.513782] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:19:48.230 [2024-11-18 01:02:22.513791] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:19:48.230 [2024-11-18 01:02:22.513862] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:19:48.230 [2024-11-18 01:02:22.514193] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:19:48.230 [2024-11-18 01:02:22.514211] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:19:48.230 [2024-11-18 01:02:22.514316] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.230 pt4 00:19:48.230 01:02:22 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:48.230 01:02:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:48.230 01:02:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:48.230 01:02:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:48.230 01:02:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:48.230 01:02:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:48.230 01:02:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:48.230 01:02:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:48.230 01:02:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:48.230 01:02:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:48.230 01:02:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.230 01:02:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.489 01:02:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:48.489 "name": "raid_bdev1", 00:19:48.489 "uuid": "71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc", 00:19:48.489 "strip_size_kb": 0, 00:19:48.489 "state": "online", 00:19:48.489 "raid_level": "raid1", 00:19:48.489 "superblock": true, 00:19:48.489 "num_base_bdevs": 4, 00:19:48.489 "num_base_bdevs_discovered": 3, 00:19:48.489 "num_base_bdevs_operational": 3, 00:19:48.489 "base_bdevs_list": [ 00:19:48.489 { 00:19:48.489 "name": null, 00:19:48.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.489 "is_configured": false, 00:19:48.489 "data_offset": 2048, 00:19:48.489 "data_size": 63488 00:19:48.489 }, 00:19:48.489 { 00:19:48.489 "name": "pt2", 00:19:48.489 "uuid": "6bb9d402-cfb0-590f-809d-91ed25446948", 00:19:48.489 "is_configured": true, 00:19:48.489 "data_offset": 2048, 00:19:48.489 "data_size": 63488 00:19:48.489 }, 00:19:48.489 { 00:19:48.489 "name": "pt3", 00:19:48.489 "uuid": "13bb83f0-029d-5391-aef6-d9dfb6bec0a9", 00:19:48.489 "is_configured": true, 00:19:48.489 "data_offset": 2048, 00:19:48.489 "data_size": 63488 00:19:48.489 }, 00:19:48.489 { 00:19:48.489 "name": "pt4", 00:19:48.489 "uuid": "28d1c1d7-0bf1-5bb7-8fb9-ad601e9bf278", 00:19:48.489 "is_configured": true, 00:19:48.489 "data_offset": 2048, 00:19:48.489 "data_size": 63488 00:19:48.489 } 00:19:48.489 ] 00:19:48.489 }' 00:19:48.489 01:02:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:48.489 01:02:22 -- common/autotest_common.sh@10 -- # set +x 00:19:49.057 01:02:23 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:19:49.057 01:02:23 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:49.317 [2024-11-18 01:02:23.468960] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:49.317 [2024-11-18 01:02:23.469015] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:19:49.317 [2024-11-18 01:02:23.469099] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:49.317 [2024-11-18 01:02:23.469180] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:49.317 [2024-11-18 01:02:23.469190] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:19:49.317 01:02:23 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.317 01:02:23 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:19:49.317 01:02:23 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:19:49.317 01:02:23 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:19:49.317 01:02:23 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:49.577 [2024-11-18 01:02:23.860992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:49.577 [2024-11-18 01:02:23.861106] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.577 [2024-11-18 01:02:23.861160] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:49.577 [2024-11-18 01:02:23.861184] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.577 [2024-11-18 01:02:23.864036] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.577 [2024-11-18 01:02:23.864112] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:49.577 [2024-11-18 01:02:23.864205] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:49.577 [2024-11-18 01:02:23.864248] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:49.577 pt1 00:19:49.577 01:02:23 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:49.577 01:02:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:49.577 01:02:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:49.577 01:02:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:49.577 01:02:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:49.577 01:02:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:49.577 01:02:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:49.577 01:02:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:49.577 01:02:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:49.577 01:02:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:49.577 01:02:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.577 01:02:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.836 01:02:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:49.836 "name": "raid_bdev1", 00:19:49.836 "uuid": "71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc", 00:19:49.836 "strip_size_kb": 0, 00:19:49.836 "state": "configuring", 00:19:49.836 "raid_level": "raid1", 00:19:49.836 "superblock": true, 00:19:49.836 "num_base_bdevs": 4, 00:19:49.836 "num_base_bdevs_discovered": 1, 00:19:49.836 "num_base_bdevs_operational": 4, 00:19:49.836 "base_bdevs_list": [ 00:19:49.836 { 00:19:49.836 "name": "pt1", 00:19:49.836 "uuid": 
"63dbc414-2baf-5b50-817f-1a751a5e2575", 00:19:49.836 "is_configured": true, 00:19:49.836 "data_offset": 2048, 00:19:49.836 "data_size": 63488 00:19:49.836 }, 00:19:49.836 { 00:19:49.836 "name": null, 00:19:49.836 "uuid": "6bb9d402-cfb0-590f-809d-91ed25446948", 00:19:49.836 "is_configured": false, 00:19:49.836 "data_offset": 2048, 00:19:49.836 "data_size": 63488 00:19:49.836 }, 00:19:49.836 { 00:19:49.836 "name": null, 00:19:49.836 "uuid": "13bb83f0-029d-5391-aef6-d9dfb6bec0a9", 00:19:49.836 "is_configured": false, 00:19:49.836 "data_offset": 2048, 00:19:49.836 "data_size": 63488 00:19:49.836 }, 00:19:49.836 { 00:19:49.836 "name": null, 00:19:49.836 "uuid": "28d1c1d7-0bf1-5bb7-8fb9-ad601e9bf278", 00:19:49.836 "is_configured": false, 00:19:49.836 "data_offset": 2048, 00:19:49.836 "data_size": 63488 00:19:49.836 } 00:19:49.836 ] 00:19:49.836 }' 00:19:49.836 01:02:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:49.836 01:02:24 -- common/autotest_common.sh@10 -- # set +x 00:19:50.404 01:02:24 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:19:50.404 01:02:24 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:50.404 01:02:24 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:50.664 01:02:24 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:50.664 01:02:24 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:50.664 01:02:24 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:50.923 01:02:25 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:50.923 01:02:25 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:50.923 01:02:25 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:51.182 01:02:25 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:51.182 01:02:25 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:51.182 01:02:25 -- bdev/bdev_raid.sh@489 -- # i=3 00:19:51.182 01:02:25 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:51.182 [2024-11-18 01:02:25.558669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:51.182 [2024-11-18 01:02:25.558801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.182 [2024-11-18 01:02:25.558848] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:51.182 [2024-11-18 01:02:25.558880] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.182 [2024-11-18 01:02:25.559407] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.182 [2024-11-18 01:02:25.559469] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:51.182 [2024-11-18 01:02:25.559594] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:51.182 [2024-11-18 01:02:25.559608] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:51.182 [2024-11-18 01:02:25.559616] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:51.182 [2024-11-18 01:02:25.559654] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 
00:19:51.182 [2024-11-18 01:02:25.559731] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:51.182 pt4 00:19:51.442 01:02:25 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:51.442 01:02:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:51.442 01:02:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:51.442 01:02:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:51.442 01:02:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:51.442 01:02:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:51.442 01:02:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:51.442 01:02:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:51.442 01:02:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:51.442 01:02:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:51.442 01:02:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.442 01:02:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.442 01:02:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:51.442 "name": "raid_bdev1", 00:19:51.442 "uuid": "71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc", 00:19:51.442 "strip_size_kb": 0, 00:19:51.442 "state": "configuring", 00:19:51.442 "raid_level": "raid1", 00:19:51.442 "superblock": true, 00:19:51.442 "num_base_bdevs": 4, 00:19:51.442 "num_base_bdevs_discovered": 1, 00:19:51.442 "num_base_bdevs_operational": 3, 00:19:51.442 "base_bdevs_list": [ 00:19:51.442 { 00:19:51.442 "name": null, 00:19:51.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.442 "is_configured": false, 00:19:51.442 "data_offset": 2048, 00:19:51.442 "data_size": 63488 00:19:51.442 }, 00:19:51.442 { 00:19:51.442 "name": null, 00:19:51.442 "uuid": "6bb9d402-cfb0-590f-809d-91ed25446948", 00:19:51.442 "is_configured": false, 00:19:51.442 "data_offset": 2048, 00:19:51.442 "data_size": 63488 00:19:51.442 }, 00:19:51.442 { 00:19:51.442 "name": null, 00:19:51.442 "uuid": "13bb83f0-029d-5391-aef6-d9dfb6bec0a9", 00:19:51.442 "is_configured": false, 00:19:51.442 "data_offset": 2048, 00:19:51.442 "data_size": 63488 00:19:51.442 }, 00:19:51.442 { 00:19:51.442 "name": "pt4", 00:19:51.442 "uuid": "28d1c1d7-0bf1-5bb7-8fb9-ad601e9bf278", 00:19:51.442 "is_configured": true, 00:19:51.442 "data_offset": 2048, 00:19:51.442 "data_size": 63488 00:19:51.442 } 00:19:51.442 ] 00:19:51.442 }' 00:19:51.442 01:02:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:51.442 01:02:25 -- common/autotest_common.sh@10 -- # set +x 00:19:52.011 01:02:26 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:19:52.011 01:02:26 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:52.011 01:02:26 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:52.271 [2024-11-18 01:02:26.542754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:52.271 [2024-11-18 01:02:26.542901] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.271 [2024-11-18 01:02:26.542960] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:52.271 [2024-11-18 01:02:26.543001] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.271 [2024-11-18 
01:02:26.543587] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.271 [2024-11-18 01:02:26.543668] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:52.271 [2024-11-18 01:02:26.543811] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:52.271 [2024-11-18 01:02:26.543855] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:52.271 pt2 00:19:52.271 01:02:26 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:19:52.271 01:02:26 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:52.271 01:02:26 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:52.531 [2024-11-18 01:02:26.790767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:52.531 [2024-11-18 01:02:26.790921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.531 [2024-11-18 01:02:26.790986] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:52.531 [2024-11-18 01:02:26.791034] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.531 [2024-11-18 01:02:26.791596] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.531 [2024-11-18 01:02:26.791681] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:52.531 [2024-11-18 01:02:26.791835] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:52.531 [2024-11-18 01:02:26.791887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:52.531 [2024-11-18 01:02:26.792071] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:19:52.531 [2024-11-18 01:02:26.792089] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:52.531 [2024-11-18 01:02:26.792211] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:19:52.531 [2024-11-18 01:02:26.792602] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:19:52.531 [2024-11-18 01:02:26.792624] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:19:52.531 [2024-11-18 01:02:26.792794] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.531 pt3 00:19:52.531 01:02:26 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:19:52.531 01:02:26 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:52.531 01:02:26 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:52.531 01:02:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:52.531 01:02:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:52.531 01:02:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:52.531 01:02:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:52.531 01:02:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:52.531 01:02:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:52.531 01:02:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:52.531 01:02:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:52.531 01:02:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:52.531 01:02:26 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.531 01:02:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.790 01:02:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:52.790 "name": "raid_bdev1", 00:19:52.790 "uuid": "71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc", 00:19:52.790 "strip_size_kb": 0, 00:19:52.790 "state": "online", 00:19:52.790 "raid_level": "raid1", 00:19:52.790 "superblock": true, 00:19:52.790 "num_base_bdevs": 4, 00:19:52.790 "num_base_bdevs_discovered": 3, 00:19:52.790 "num_base_bdevs_operational": 3, 00:19:52.790 "base_bdevs_list": [ 00:19:52.790 { 00:19:52.790 "name": null, 00:19:52.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.790 "is_configured": false, 00:19:52.790 "data_offset": 2048, 00:19:52.790 "data_size": 63488 00:19:52.790 }, 00:19:52.790 { 00:19:52.790 "name": "pt2", 00:19:52.790 "uuid": "6bb9d402-cfb0-590f-809d-91ed25446948", 00:19:52.790 "is_configured": true, 00:19:52.790 "data_offset": 2048, 00:19:52.790 "data_size": 63488 00:19:52.790 }, 00:19:52.790 { 00:19:52.790 "name": "pt3", 00:19:52.790 "uuid": "13bb83f0-029d-5391-aef6-d9dfb6bec0a9", 00:19:52.790 "is_configured": true, 00:19:52.790 "data_offset": 2048, 00:19:52.790 "data_size": 63488 00:19:52.790 }, 00:19:52.790 { 00:19:52.790 "name": "pt4", 00:19:52.790 "uuid": "28d1c1d7-0bf1-5bb7-8fb9-ad601e9bf278", 00:19:52.790 "is_configured": true, 00:19:52.790 "data_offset": 2048, 00:19:52.790 "data_size": 63488 00:19:52.790 } 00:19:52.790 ] 00:19:52.790 }' 00:19:52.790 01:02:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:52.790 01:02:27 -- common/autotest_common.sh@10 -- # set +x 00:19:53.359 01:02:27 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:53.359 01:02:27 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:19:53.618 [2024-11-18 01:02:27.779030] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:53.618 01:02:27 -- bdev/bdev_raid.sh@506 -- # '[' 71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc '!=' 71f7f82a-6aa3-4d3c-86cd-5e5df563d3bc ']' 00:19:53.618 01:02:27 -- bdev/bdev_raid.sh@511 -- # killprocess 132394 00:19:53.618 01:02:27 -- common/autotest_common.sh@936 -- # '[' -z 132394 ']' 00:19:53.618 01:02:27 -- common/autotest_common.sh@940 -- # kill -0 132394 00:19:53.618 01:02:27 -- common/autotest_common.sh@941 -- # uname 00:19:53.618 01:02:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:53.618 01:02:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132394 00:19:53.618 killing process with pid 132394 00:19:53.618 01:02:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:53.618 01:02:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:53.618 01:02:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132394' 00:19:53.618 01:02:27 -- common/autotest_common.sh@955 -- # kill 132394 00:19:53.618 01:02:27 -- common/autotest_common.sh@960 -- # wait 132394 00:19:53.618 [2024-11-18 01:02:27.837944] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:53.618 [2024-11-18 01:02:27.838066] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:53.618 [2024-11-18 01:02:27.838166] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.618 [2024-11-18 
01:02:27.838177] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:19:53.618 [2024-11-18 01:02:27.924065] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:54.187 00:19:54.187 real 0m20.438s 00:19:54.187 user 0m36.974s 00:19:54.187 sys 0m3.583s 00:19:54.187 01:02:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:54.187 01:02:28 -- common/autotest_common.sh@10 -- # set +x 00:19:54.187 ************************************ 00:19:54.187 END TEST raid_superblock_test 00:19:54.187 ************************************ 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:19:54.187 01:02:28 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:19:54.187 01:02:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:54.187 01:02:28 -- common/autotest_common.sh@10 -- # set +x 00:19:54.187 ************************************ 00:19:54.187 START TEST raid_rebuild_test 00:19:54.187 ************************************ 00:19:54.187 01:02:28 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false false 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@544 -- # raid_pid=133050 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:54.187 01:02:28 -- bdev/bdev_raid.sh@545 -- # waitforlisten 133050 /var/tmp/spdk-raid.sock 00:19:54.187 01:02:28 -- common/autotest_common.sh@829 -- # '[' -z 133050 ']' 00:19:54.187 01:02:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:54.187 01:02:28 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.187 01:02:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:54.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:54.187 01:02:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.187 01:02:28 -- common/autotest_common.sh@10 -- # set +x 00:19:54.187 [2024-11-18 01:02:28.478780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:54.187 [2024-11-18 01:02:28.479346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133050 ] 00:19:54.187 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:54.187 Zero copy mechanism will not be used. 00:19:54.446 [2024-11-18 01:02:28.629350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.446 [2024-11-18 01:02:28.709975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.446 [2024-11-18 01:02:28.789198] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:55.015 01:02:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.015 01:02:29 -- common/autotest_common.sh@862 -- # return 0 00:19:55.015 01:02:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:55.015 01:02:29 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:55.015 01:02:29 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:55.273 BaseBdev1 00:19:55.273 01:02:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:55.273 01:02:29 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:55.273 01:02:29 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:55.533 BaseBdev2 00:19:55.533 01:02:29 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:55.792 spare_malloc 00:19:55.792 01:02:30 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:56.052 spare_delay 00:19:56.052 01:02:30 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:56.311 [2024-11-18 01:02:30.578667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:56.311 [2024-11-18 01:02:30.579035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.311 [2024-11-18 01:02:30.579121] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:56.311 [2024-11-18 01:02:30.579312] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.311 [2024-11-18 01:02:30.582449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.311 [2024-11-18 01:02:30.582735] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:56.311 spare 00:19:56.311 01:02:30 -- bdev/bdev_raid.sh@563 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:56.571 [2024-11-18 01:02:30.847287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:56.571 [2024-11-18 01:02:30.850055] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:56.571 [2024-11-18 01:02:30.850327] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:56.571 [2024-11-18 01:02:30.850374] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:56.571 [2024-11-18 01:02:30.850690] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:19:56.571 [2024-11-18 01:02:30.851218] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:56.571 [2024-11-18 01:02:30.851324] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:19:56.571 [2024-11-18 01:02:30.851683] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.571 01:02:30 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:56.571 01:02:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:56.571 01:02:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:56.571 01:02:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:56.571 01:02:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:56.571 01:02:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:56.571 01:02:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:56.571 01:02:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:56.571 01:02:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:56.571 01:02:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:56.571 01:02:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.571 01:02:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.830 01:02:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:56.830 "name": "raid_bdev1", 00:19:56.830 "uuid": "dc0dc0cd-ba3c-424d-aa1f-a09269b846e1", 00:19:56.830 "strip_size_kb": 0, 00:19:56.830 "state": "online", 00:19:56.830 "raid_level": "raid1", 00:19:56.830 "superblock": false, 00:19:56.830 "num_base_bdevs": 2, 00:19:56.830 "num_base_bdevs_discovered": 2, 00:19:56.830 "num_base_bdevs_operational": 2, 00:19:56.830 "base_bdevs_list": [ 00:19:56.830 { 00:19:56.830 "name": "BaseBdev1", 00:19:56.830 "uuid": "082c7967-abe0-4713-a6f3-419fe2d68862", 00:19:56.830 "is_configured": true, 00:19:56.830 "data_offset": 0, 00:19:56.830 "data_size": 65536 00:19:56.830 }, 00:19:56.830 { 00:19:56.830 "name": "BaseBdev2", 00:19:56.830 "uuid": "9c46473b-a513-4571-bf2c-6031fe23466e", 00:19:56.830 "is_configured": true, 00:19:56.830 "data_offset": 0, 00:19:56.830 "data_size": 65536 00:19:56.830 } 00:19:56.830 ] 00:19:56.830 }' 00:19:56.830 01:02:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:56.830 01:02:31 -- common/autotest_common.sh@10 -- # set +x 00:19:57.399 01:02:31 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:57.399 01:02:31 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:57.658 [2024-11-18 01:02:31.848400] bdev_raid.c: 
993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.658 01:02:31 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:57.658 01:02:31 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.658 01:02:31 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:57.917 01:02:32 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:57.917 01:02:32 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:57.917 01:02:32 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:57.917 01:02:32 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:57.917 01:02:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:57.917 01:02:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:57.917 01:02:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:57.917 01:02:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:57.917 01:02:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:57.917 01:02:32 -- bdev/nbd_common.sh@12 -- # local i 00:19:57.917 01:02:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:57.917 01:02:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:57.917 01:02:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:58.249 [2024-11-18 01:02:32.388157] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:19:58.249 /dev/nbd0 00:19:58.249 01:02:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:58.249 01:02:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:58.249 01:02:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:58.249 01:02:32 -- common/autotest_common.sh@867 -- # local i 00:19:58.249 01:02:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:58.249 01:02:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:58.249 01:02:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:58.249 01:02:32 -- common/autotest_common.sh@871 -- # break 00:19:58.249 01:02:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:58.249 01:02:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:58.249 01:02:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:58.249 1+0 records in 00:19:58.249 1+0 records out 00:19:58.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100626 s, 4.1 MB/s 00:19:58.249 01:02:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.249 01:02:32 -- common/autotest_common.sh@884 -- # size=4096 00:19:58.249 01:02:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.249 01:02:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:58.249 01:02:32 -- common/autotest_common.sh@887 -- # return 0 00:19:58.249 01:02:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:58.249 01:02:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:58.249 01:02:32 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:58.249 01:02:32 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:58.249 01:02:32 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:02.457 65536+0 records in 00:20:02.457 65536+0 records out 00:20:02.457 33554432 bytes (34 MB, 32 MiB) 
copied, 3.92078 s, 8.6 MB/s 00:20:02.457 01:02:36 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:02.457 01:02:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:02.457 01:02:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:02.457 01:02:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:02.457 01:02:36 -- bdev/nbd_common.sh@51 -- # local i 00:20:02.457 01:02:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:02.457 01:02:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:02.457 [2024-11-18 01:02:36.644135] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.457 01:02:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:02.457 01:02:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:02.457 01:02:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:02.457 01:02:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:02.457 01:02:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:02.457 01:02:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:02.457 01:02:36 -- bdev/nbd_common.sh@41 -- # break 00:20:02.457 01:02:36 -- bdev/nbd_common.sh@45 -- # return 0 00:20:02.457 01:02:36 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:02.715 [2024-11-18 01:02:36.891778] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:02.715 01:02:36 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:02.715 01:02:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:02.716 01:02:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:02.716 01:02:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:02.716 01:02:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:02.716 01:02:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:02.716 01:02:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:02.716 01:02:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:02.716 01:02:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:02.716 01:02:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:02.716 01:02:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.716 01:02:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.974 01:02:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:02.974 "name": "raid_bdev1", 00:20:02.974 "uuid": "dc0dc0cd-ba3c-424d-aa1f-a09269b846e1", 00:20:02.974 "strip_size_kb": 0, 00:20:02.974 "state": "online", 00:20:02.974 "raid_level": "raid1", 00:20:02.974 "superblock": false, 00:20:02.974 "num_base_bdevs": 2, 00:20:02.974 "num_base_bdevs_discovered": 1, 00:20:02.974 "num_base_bdevs_operational": 1, 00:20:02.974 "base_bdevs_list": [ 00:20:02.974 { 00:20:02.974 "name": null, 00:20:02.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.974 "is_configured": false, 00:20:02.974 "data_offset": 0, 00:20:02.974 "data_size": 65536 00:20:02.974 }, 00:20:02.974 { 00:20:02.974 "name": "BaseBdev2", 00:20:02.974 "uuid": "9c46473b-a513-4571-bf2c-6031fe23466e", 00:20:02.974 "is_configured": true, 00:20:02.974 "data_offset": 0, 00:20:02.974 "data_size": 65536 00:20:02.974 } 00:20:02.974 ] 00:20:02.974 }' 
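Before the base bdev is removed, the rebuild test exports raid_bdev1 over NBD, fills it with random data, and only then detaches BaseBdev1 and re-checks the array state (num_base_bdevs_discovered drops from 2 to 1 while the raid1 stays online). A rough sketch of that write-then-degrade step, assembled from the same RPC calls and dd invocation shown in the log (re-running the sequence in isolation is an assumption for illustration):

  # export the array, fill it, detach one leg, and confirm the raid1 stays online degraded
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
  dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'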
00:20:02.974 01:02:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:02.974 01:02:37 -- common/autotest_common.sh@10 -- # set +x 00:20:03.543 01:02:37 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:03.543 [2024-11-18 01:02:37.931955] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:03.543 [2024-11-18 01:02:37.932301] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:03.543 [2024-11-18 01:02:37.940102] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d05ee0 00:20:03.543 [2024-11-18 01:02:37.942788] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:03.802 01:02:37 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:04.740 01:02:38 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:04.740 01:02:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:04.740 01:02:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:04.740 01:02:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:04.740 01:02:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:04.740 01:02:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.740 01:02:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.001 01:02:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:05.001 "name": "raid_bdev1", 00:20:05.001 "uuid": "dc0dc0cd-ba3c-424d-aa1f-a09269b846e1", 00:20:05.001 "strip_size_kb": 0, 00:20:05.001 "state": "online", 00:20:05.001 "raid_level": "raid1", 00:20:05.001 "superblock": false, 00:20:05.001 "num_base_bdevs": 2, 00:20:05.001 "num_base_bdevs_discovered": 2, 00:20:05.001 "num_base_bdevs_operational": 2, 00:20:05.001 "process": { 00:20:05.001 "type": "rebuild", 00:20:05.001 "target": "spare", 00:20:05.001 "progress": { 00:20:05.001 "blocks": 24576, 00:20:05.001 "percent": 37 00:20:05.001 } 00:20:05.001 }, 00:20:05.001 "base_bdevs_list": [ 00:20:05.001 { 00:20:05.001 "name": "spare", 00:20:05.001 "uuid": "8f5a76ba-8f44-5d7d-90cf-7f183e420ca6", 00:20:05.001 "is_configured": true, 00:20:05.001 "data_offset": 0, 00:20:05.001 "data_size": 65536 00:20:05.001 }, 00:20:05.001 { 00:20:05.001 "name": "BaseBdev2", 00:20:05.001 "uuid": "9c46473b-a513-4571-bf2c-6031fe23466e", 00:20:05.001 "is_configured": true, 00:20:05.001 "data_offset": 0, 00:20:05.001 "data_size": 65536 00:20:05.001 } 00:20:05.001 ] 00:20:05.001 }' 00:20:05.001 01:02:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:05.001 01:02:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:05.001 01:02:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:05.001 01:02:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:05.001 01:02:39 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:05.260 [2024-11-18 01:02:39.544812] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:05.260 [2024-11-18 01:02:39.555682] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:05.260 [2024-11-18 01:02:39.555955] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.260 01:02:39 -- 
bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:05.260 01:02:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:05.260 01:02:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:05.260 01:02:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:05.260 01:02:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:05.260 01:02:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:05.260 01:02:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:05.260 01:02:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:05.260 01:02:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:05.260 01:02:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:05.260 01:02:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.260 01:02:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.519 01:02:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:05.519 "name": "raid_bdev1", 00:20:05.519 "uuid": "dc0dc0cd-ba3c-424d-aa1f-a09269b846e1", 00:20:05.519 "strip_size_kb": 0, 00:20:05.519 "state": "online", 00:20:05.519 "raid_level": "raid1", 00:20:05.519 "superblock": false, 00:20:05.519 "num_base_bdevs": 2, 00:20:05.519 "num_base_bdevs_discovered": 1, 00:20:05.519 "num_base_bdevs_operational": 1, 00:20:05.519 "base_bdevs_list": [ 00:20:05.519 { 00:20:05.519 "name": null, 00:20:05.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.519 "is_configured": false, 00:20:05.519 "data_offset": 0, 00:20:05.519 "data_size": 65536 00:20:05.519 }, 00:20:05.519 { 00:20:05.519 "name": "BaseBdev2", 00:20:05.519 "uuid": "9c46473b-a513-4571-bf2c-6031fe23466e", 00:20:05.519 "is_configured": true, 00:20:05.519 "data_offset": 0, 00:20:05.519 "data_size": 65536 00:20:05.519 } 00:20:05.519 ] 00:20:05.519 }' 00:20:05.519 01:02:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:05.519 01:02:39 -- common/autotest_common.sh@10 -- # set +x 00:20:06.087 01:02:40 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:06.087 01:02:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:06.087 01:02:40 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:06.087 01:02:40 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:06.087 01:02:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:06.087 01:02:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.087 01:02:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.347 01:02:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:06.347 "name": "raid_bdev1", 00:20:06.347 "uuid": "dc0dc0cd-ba3c-424d-aa1f-a09269b846e1", 00:20:06.347 "strip_size_kb": 0, 00:20:06.347 "state": "online", 00:20:06.347 "raid_level": "raid1", 00:20:06.347 "superblock": false, 00:20:06.347 "num_base_bdevs": 2, 00:20:06.347 "num_base_bdevs_discovered": 1, 00:20:06.347 "num_base_bdevs_operational": 1, 00:20:06.347 "base_bdevs_list": [ 00:20:06.347 { 00:20:06.347 "name": null, 00:20:06.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.347 "is_configured": false, 00:20:06.347 "data_offset": 0, 00:20:06.347 "data_size": 65536 00:20:06.347 }, 00:20:06.347 { 00:20:06.347 "name": "BaseBdev2", 00:20:06.347 "uuid": "9c46473b-a513-4571-bf2c-6031fe23466e", 00:20:06.347 "is_configured": true, 
00:20:06.347 "data_offset": 0, 00:20:06.347 "data_size": 65536 00:20:06.347 } 00:20:06.347 ] 00:20:06.347 }' 00:20:06.347 01:02:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:06.606 01:02:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:06.606 01:02:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:06.606 01:02:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:06.606 01:02:40 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:06.606 [2024-11-18 01:02:40.992230] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:06.606 [2024-11-18 01:02:40.992577] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:06.606 [2024-11-18 01:02:41.000338] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06080 00:20:06.606 [2024-11-18 01:02:41.002928] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:06.866 01:02:41 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:07.803 01:02:42 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.803 01:02:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:07.803 01:02:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:07.803 01:02:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:07.803 01:02:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:07.803 01:02:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.803 01:02:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:08.063 "name": "raid_bdev1", 00:20:08.063 "uuid": "dc0dc0cd-ba3c-424d-aa1f-a09269b846e1", 00:20:08.063 "strip_size_kb": 0, 00:20:08.063 "state": "online", 00:20:08.063 "raid_level": "raid1", 00:20:08.063 "superblock": false, 00:20:08.063 "num_base_bdevs": 2, 00:20:08.063 "num_base_bdevs_discovered": 2, 00:20:08.063 "num_base_bdevs_operational": 2, 00:20:08.063 "process": { 00:20:08.063 "type": "rebuild", 00:20:08.063 "target": "spare", 00:20:08.063 "progress": { 00:20:08.063 "blocks": 24576, 00:20:08.063 "percent": 37 00:20:08.063 } 00:20:08.063 }, 00:20:08.063 "base_bdevs_list": [ 00:20:08.063 { 00:20:08.063 "name": "spare", 00:20:08.063 "uuid": "8f5a76ba-8f44-5d7d-90cf-7f183e420ca6", 00:20:08.063 "is_configured": true, 00:20:08.063 "data_offset": 0, 00:20:08.063 "data_size": 65536 00:20:08.063 }, 00:20:08.063 { 00:20:08.063 "name": "BaseBdev2", 00:20:08.063 "uuid": "9c46473b-a513-4571-bf2c-6031fe23466e", 00:20:08.063 "is_configured": true, 00:20:08.063 "data_offset": 0, 00:20:08.063 "data_size": 65536 00:20:08.063 } 00:20:08.063 ] 00:20:08.063 }' 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:08.063 01:02:42 -- 
bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@657 -- # local timeout=365 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.063 01:02:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.322 01:02:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:08.322 "name": "raid_bdev1", 00:20:08.322 "uuid": "dc0dc0cd-ba3c-424d-aa1f-a09269b846e1", 00:20:08.322 "strip_size_kb": 0, 00:20:08.323 "state": "online", 00:20:08.323 "raid_level": "raid1", 00:20:08.323 "superblock": false, 00:20:08.323 "num_base_bdevs": 2, 00:20:08.323 "num_base_bdevs_discovered": 2, 00:20:08.323 "num_base_bdevs_operational": 2, 00:20:08.323 "process": { 00:20:08.323 "type": "rebuild", 00:20:08.323 "target": "spare", 00:20:08.323 "progress": { 00:20:08.323 "blocks": 30720, 00:20:08.323 "percent": 46 00:20:08.323 } 00:20:08.323 }, 00:20:08.323 "base_bdevs_list": [ 00:20:08.323 { 00:20:08.323 "name": "spare", 00:20:08.323 "uuid": "8f5a76ba-8f44-5d7d-90cf-7f183e420ca6", 00:20:08.323 "is_configured": true, 00:20:08.323 "data_offset": 0, 00:20:08.323 "data_size": 65536 00:20:08.323 }, 00:20:08.323 { 00:20:08.323 "name": "BaseBdev2", 00:20:08.323 "uuid": "9c46473b-a513-4571-bf2c-6031fe23466e", 00:20:08.323 "is_configured": true, 00:20:08.323 "data_offset": 0, 00:20:08.323 "data_size": 65536 00:20:08.323 } 00:20:08.323 ] 00:20:08.323 }' 00:20:08.323 01:02:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:08.323 01:02:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.323 01:02:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:08.323 01:02:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.323 01:02:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:09.702 01:02:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:09.702 01:02:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.702 01:02:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:09.702 01:02:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:09.702 01:02:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:09.702 01:02:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:09.702 01:02:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.702 01:02:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.702 01:02:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:09.702 "name": "raid_bdev1", 00:20:09.702 "uuid": "dc0dc0cd-ba3c-424d-aa1f-a09269b846e1", 00:20:09.702 "strip_size_kb": 0, 00:20:09.702 "state": "online", 00:20:09.702 "raid_level": "raid1", 00:20:09.702 "superblock": false, 00:20:09.702 "num_base_bdevs": 2, 00:20:09.702 "num_base_bdevs_discovered": 2, 00:20:09.702 "num_base_bdevs_operational": 2, 00:20:09.702 "process": { 
00:20:09.702 "type": "rebuild", 00:20:09.702 "target": "spare", 00:20:09.702 "progress": { 00:20:09.702 "blocks": 57344, 00:20:09.702 "percent": 87 00:20:09.702 } 00:20:09.702 }, 00:20:09.702 "base_bdevs_list": [ 00:20:09.702 { 00:20:09.702 "name": "spare", 00:20:09.702 "uuid": "8f5a76ba-8f44-5d7d-90cf-7f183e420ca6", 00:20:09.702 "is_configured": true, 00:20:09.702 "data_offset": 0, 00:20:09.702 "data_size": 65536 00:20:09.702 }, 00:20:09.702 { 00:20:09.702 "name": "BaseBdev2", 00:20:09.702 "uuid": "9c46473b-a513-4571-bf2c-6031fe23466e", 00:20:09.702 "is_configured": true, 00:20:09.702 "data_offset": 0, 00:20:09.702 "data_size": 65536 00:20:09.702 } 00:20:09.702 ] 00:20:09.702 }' 00:20:09.702 01:02:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:09.702 01:02:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.702 01:02:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:09.702 01:02:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.702 01:02:44 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:09.962 [2024-11-18 01:02:44.226722] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:09.962 [2024-11-18 01:02:44.227117] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:09.962 [2024-11-18 01:02:44.227305] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.900 01:02:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:10.900 01:02:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:10.900 01:02:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:10.900 01:02:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:10.900 01:02:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:10.900 01:02:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:10.900 01:02:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.900 01:02:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.161 01:02:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:11.161 "name": "raid_bdev1", 00:20:11.161 "uuid": "dc0dc0cd-ba3c-424d-aa1f-a09269b846e1", 00:20:11.161 "strip_size_kb": 0, 00:20:11.161 "state": "online", 00:20:11.161 "raid_level": "raid1", 00:20:11.161 "superblock": false, 00:20:11.161 "num_base_bdevs": 2, 00:20:11.161 "num_base_bdevs_discovered": 2, 00:20:11.161 "num_base_bdevs_operational": 2, 00:20:11.161 "base_bdevs_list": [ 00:20:11.161 { 00:20:11.161 "name": "spare", 00:20:11.161 "uuid": "8f5a76ba-8f44-5d7d-90cf-7f183e420ca6", 00:20:11.161 "is_configured": true, 00:20:11.161 "data_offset": 0, 00:20:11.161 "data_size": 65536 00:20:11.161 }, 00:20:11.161 { 00:20:11.161 "name": "BaseBdev2", 00:20:11.161 "uuid": "9c46473b-a513-4571-bf2c-6031fe23466e", 00:20:11.161 "is_configured": true, 00:20:11.161 "data_offset": 0, 00:20:11.161 "data_size": 65536 00:20:11.161 } 00:20:11.161 ] 00:20:11.161 }' 00:20:11.161 01:02:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:11.161 01:02:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:11.161 01:02:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:11.161 01:02:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:11.161 01:02:45 -- bdev/bdev_raid.sh@660 -- # break 00:20:11.161 01:02:45 -- 
bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:11.161 01:02:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:11.161 01:02:45 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:11.161 01:02:45 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:11.161 01:02:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:11.161 01:02:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.161 01:02:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.420 01:02:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:11.420 "name": "raid_bdev1", 00:20:11.420 "uuid": "dc0dc0cd-ba3c-424d-aa1f-a09269b846e1", 00:20:11.420 "strip_size_kb": 0, 00:20:11.420 "state": "online", 00:20:11.420 "raid_level": "raid1", 00:20:11.420 "superblock": false, 00:20:11.420 "num_base_bdevs": 2, 00:20:11.420 "num_base_bdevs_discovered": 2, 00:20:11.420 "num_base_bdevs_operational": 2, 00:20:11.420 "base_bdevs_list": [ 00:20:11.420 { 00:20:11.420 "name": "spare", 00:20:11.420 "uuid": "8f5a76ba-8f44-5d7d-90cf-7f183e420ca6", 00:20:11.420 "is_configured": true, 00:20:11.420 "data_offset": 0, 00:20:11.420 "data_size": 65536 00:20:11.420 }, 00:20:11.420 { 00:20:11.420 "name": "BaseBdev2", 00:20:11.420 "uuid": "9c46473b-a513-4571-bf2c-6031fe23466e", 00:20:11.420 "is_configured": true, 00:20:11.420 "data_offset": 0, 00:20:11.420 "data_size": 65536 00:20:11.420 } 00:20:11.420 ] 00:20:11.420 }' 00:20:11.420 01:02:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:11.420 01:02:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:11.420 01:02:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:11.420 01:02:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:11.421 01:02:45 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:11.421 01:02:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:11.421 01:02:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:11.421 01:02:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:11.421 01:02:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:11.421 01:02:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:11.421 01:02:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:11.421 01:02:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:11.421 01:02:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:11.421 01:02:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:11.421 01:02:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.421 01:02:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.680 01:02:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:11.680 "name": "raid_bdev1", 00:20:11.680 "uuid": "dc0dc0cd-ba3c-424d-aa1f-a09269b846e1", 00:20:11.680 "strip_size_kb": 0, 00:20:11.680 "state": "online", 00:20:11.680 "raid_level": "raid1", 00:20:11.680 "superblock": false, 00:20:11.680 "num_base_bdevs": 2, 00:20:11.680 "num_base_bdevs_discovered": 2, 00:20:11.680 "num_base_bdevs_operational": 2, 00:20:11.680 "base_bdevs_list": [ 00:20:11.680 { 00:20:11.680 "name": "spare", 00:20:11.680 "uuid": "8f5a76ba-8f44-5d7d-90cf-7f183e420ca6", 00:20:11.680 "is_configured": true, 00:20:11.680 "data_offset": 0, 
00:20:11.680 "data_size": 65536 00:20:11.680 }, 00:20:11.680 { 00:20:11.680 "name": "BaseBdev2", 00:20:11.680 "uuid": "9c46473b-a513-4571-bf2c-6031fe23466e", 00:20:11.680 "is_configured": true, 00:20:11.680 "data_offset": 0, 00:20:11.680 "data_size": 65536 00:20:11.680 } 00:20:11.681 ] 00:20:11.681 }' 00:20:11.681 01:02:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:11.681 01:02:45 -- common/autotest_common.sh@10 -- # set +x 00:20:12.249 01:02:46 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:12.509 [2024-11-18 01:02:46.774854] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:12.509 [2024-11-18 01:02:46.775155] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:12.509 [2024-11-18 01:02:46.775417] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.509 [2024-11-18 01:02:46.775633] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.509 [2024-11-18 01:02:46.775736] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:20:12.509 01:02:46 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.509 01:02:46 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:12.768 01:02:46 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:12.768 01:02:46 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:12.768 01:02:46 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:12.768 01:02:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:12.768 01:02:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:12.768 01:02:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:12.768 01:02:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:12.768 01:02:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:12.768 01:02:46 -- bdev/nbd_common.sh@12 -- # local i 00:20:12.768 01:02:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:12.768 01:02:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:12.768 01:02:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:13.027 /dev/nbd0 00:20:13.027 01:02:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:13.027 01:02:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:13.027 01:02:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:13.027 01:02:47 -- common/autotest_common.sh@867 -- # local i 00:20:13.027 01:02:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:13.027 01:02:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:13.027 01:02:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:13.027 01:02:47 -- common/autotest_common.sh@871 -- # break 00:20:13.027 01:02:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:13.027 01:02:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:13.027 01:02:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:13.027 1+0 records in 00:20:13.027 1+0 records out 00:20:13.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000929601 s, 4.4 MB/s 00:20:13.027 01:02:47 
-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.027 01:02:47 -- common/autotest_common.sh@884 -- # size=4096 00:20:13.027 01:02:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.027 01:02:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:13.027 01:02:47 -- common/autotest_common.sh@887 -- # return 0 00:20:13.027 01:02:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:13.027 01:02:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:13.027 01:02:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:13.286 /dev/nbd1 00:20:13.286 01:02:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:13.286 01:02:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:13.286 01:02:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:13.286 01:02:47 -- common/autotest_common.sh@867 -- # local i 00:20:13.286 01:02:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:13.286 01:02:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:13.286 01:02:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:13.286 01:02:47 -- common/autotest_common.sh@871 -- # break 00:20:13.286 01:02:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:13.287 01:02:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:13.287 01:02:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:13.287 1+0 records in 00:20:13.287 1+0 records out 00:20:13.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000913145 s, 4.5 MB/s 00:20:13.287 01:02:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.287 01:02:47 -- common/autotest_common.sh@884 -- # size=4096 00:20:13.287 01:02:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.287 01:02:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:13.287 01:02:47 -- common/autotest_common.sh@887 -- # return 0 00:20:13.287 01:02:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:13.287 01:02:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:13.287 01:02:47 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:13.287 01:02:47 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:13.287 01:02:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:13.287 01:02:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:13.287 01:02:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:13.287 01:02:47 -- bdev/nbd_common.sh@51 -- # local i 00:20:13.287 01:02:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:13.287 01:02:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:13.586 01:02:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:13.586 01:02:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:13.586 01:02:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:13.586 01:02:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:13.586 01:02:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:13.586 01:02:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:13.586 01:02:47 -- bdev/nbd_common.sh@41 -- # break 
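The grep/dd/stat/rm sequence traced above is the readiness probe behind the waitfornbd helper: it first waits for the nbd node to show up in /proc/partitions, then confirms a full 4 KiB block can actually be read through it with O_DIRECT. A rough reconstruction of that helper, assuming the retry bound of 20 attempts seen in the trace (the temp-file path here is illustrative; the trace uses test/bdev/nbdtest inside the repo):

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/tmp/nbdtest
        # wait until the kernel has registered the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        grep -q -w "$nbd_name" /proc/partitions || return 1
        # then confirm one 4096-byte block can be read via O_DIRECT
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || true
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }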
00:20:13.586 01:02:47 -- bdev/nbd_common.sh@45 -- # return 0 00:20:13.586 01:02:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:13.586 01:02:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:13.877 01:02:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:13.877 01:02:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:13.877 01:02:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:13.877 01:02:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:13.877 01:02:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:13.877 01:02:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:13.877 01:02:48 -- bdev/nbd_common.sh@41 -- # break 00:20:13.877 01:02:48 -- bdev/nbd_common.sh@45 -- # return 0 00:20:13.877 01:02:48 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:13.877 01:02:48 -- bdev/bdev_raid.sh@709 -- # killprocess 133050 00:20:13.877 01:02:48 -- common/autotest_common.sh@936 -- # '[' -z 133050 ']' 00:20:13.877 01:02:48 -- common/autotest_common.sh@940 -- # kill -0 133050 00:20:13.877 01:02:48 -- common/autotest_common.sh@941 -- # uname 00:20:13.877 01:02:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:13.877 01:02:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133050 00:20:13.877 01:02:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:13.877 01:02:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:13.877 01:02:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 133050' 00:20:13.877 killing process with pid 133050 00:20:13.877 01:02:48 -- common/autotest_common.sh@955 -- # kill 133050 00:20:13.877 Received shutdown signal, test time was about 60.000000 seconds 00:20:13.877 00:20:13.877 Latency(us) 00:20:13.877 [2024-11-18T01:02:48.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.877 [2024-11-18T01:02:48.276Z] =================================================================================================================== 00:20:13.877 [2024-11-18T01:02:48.276Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:14.136 01:02:48 -- common/autotest_common.sh@960 -- # wait 133050 00:20:14.136 [2024-11-18 01:02:48.279564] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:14.136 [2024-11-18 01:02:48.336432] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:14.396 01:02:48 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:14.396 00:20:14.396 real 0m20.356s 00:20:14.396 user 0m27.637s 00:20:14.396 sys 0m4.691s 00:20:14.396 01:02:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:14.396 01:02:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.396 ************************************ 00:20:14.396 END TEST raid_rebuild_test 00:20:14.396 ************************************ 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:20:14.656 01:02:48 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:14.656 01:02:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:14.656 01:02:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.656 ************************************ 00:20:14.656 START TEST raid_rebuild_test_sb 00:20:14.656 ************************************ 00:20:14.656 01:02:48 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true false 00:20:14.656 
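raid_rebuild_test_sb re-runs the same raid_rebuild_test function, only with superblock=true as the third positional argument, so bdev_raid_create is passed -s and each base bdev reserves a data_offset of 2048 blocks for the on-disk superblock. That is why the base-device comparison at the end of this variant uses cmp -i 1048576 instead of -i 0: 2048 blocks times the 512-byte blocklen is 1048576 bytes. A small sketch of deriving that byte offset from the RPC output; the $rpc shorthand is an assumption for brevity, not part of the test script:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # data_offset is reported in blocks; multiply by the 512-byte blocklen to get
    # the byte offset at which the raid payload starts on each base bdev
    data_offset=$($rpc bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset')
    cmp -i $((data_offset * 512)) /dev/nbd0 /dev/nbd1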
01:02:48 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@544 -- # raid_pid=133576 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@545 -- # waitforlisten 133576 /var/tmp/spdk-raid.sock 00:20:14.656 01:02:48 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:14.656 01:02:48 -- common/autotest_common.sh@829 -- # '[' -z 133576 ']' 00:20:14.656 01:02:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:14.656 01:02:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:14.656 01:02:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:14.656 01:02:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.656 01:02:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.656 [2024-11-18 01:02:48.907148] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:14.656 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:14.656 Zero copy mechanism will not be used. 
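The bdevperf instance started here (raid_pid=133576) is the RPC server that every following rpc.py call talks to over /var/tmp/spdk-raid.sock; waitforlisten simply blocks until that socket is usable. A minimal sketch of the launch-and-wait pattern, not the actual waitforlisten implementation (the rootdir variable and the polling loop are assumptions):

    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_sock=/var/tmp/spdk-raid.sock
    "$rootdir"/build/examples/bdevperf -r "$rpc_sock" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # block until the app has created its RPC UNIX-domain socket, then make sure
    # the process is still alive before issuing any rpc.py commands against it
    for ((i = 0; i < 100; i++)); do
        [ -S "$rpc_sock" ] && break
        sleep 0.1
    done
    kill -0 "$raid_pid"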
00:20:14.656 [2024-11-18 01:02:48.907404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133576 ] 00:20:14.915 [2024-11-18 01:02:49.061482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.915 [2024-11-18 01:02:49.141063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.915 [2024-11-18 01:02:49.219785] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:15.484 01:02:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.484 01:02:49 -- common/autotest_common.sh@862 -- # return 0 00:20:15.484 01:02:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:15.484 01:02:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:15.484 01:02:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:15.743 BaseBdev1_malloc 00:20:15.743 01:02:50 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:16.002 [2024-11-18 01:02:50.277065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:16.002 [2024-11-18 01:02:50.277210] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.002 [2024-11-18 01:02:50.277266] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:20:16.002 [2024-11-18 01:02:50.277320] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.002 [2024-11-18 01:02:50.280274] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.002 [2024-11-18 01:02:50.280342] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:16.002 BaseBdev1 00:20:16.002 01:02:50 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:16.002 01:02:50 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:16.002 01:02:50 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:16.261 BaseBdev2_malloc 00:20:16.261 01:02:50 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:16.519 [2024-11-18 01:02:50.680907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:16.519 [2024-11-18 01:02:50.681027] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.519 [2024-11-18 01:02:50.681073] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:20:16.519 [2024-11-18 01:02:50.681122] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.519 [2024-11-18 01:02:50.683869] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.519 [2024-11-18 01:02:50.683929] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:16.519 BaseBdev2 00:20:16.519 01:02:50 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:16.519 spare_malloc 00:20:16.519 01:02:50 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:16.778 spare_delay 00:20:16.778 01:02:51 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:17.037 [2024-11-18 01:02:51.278401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:17.037 [2024-11-18 01:02:51.278511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.037 [2024-11-18 01:02:51.278555] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:17.037 [2024-11-18 01:02:51.278602] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.037 [2024-11-18 01:02:51.281528] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.037 [2024-11-18 01:02:51.281592] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:17.037 spare 00:20:17.037 01:02:51 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:17.295 [2024-11-18 01:02:51.458938] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:17.295 [2024-11-18 01:02:51.461438] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.295 [2024-11-18 01:02:51.461692] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:20:17.295 [2024-11-18 01:02:51.461704] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:17.295 [2024-11-18 01:02:51.461898] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:20:17.295 [2024-11-18 01:02:51.462366] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:20:17.295 [2024-11-18 01:02:51.462385] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:20:17.295 [2024-11-18 01:02:51.462543] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.295 01:02:51 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:17.295 01:02:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:17.295 01:02:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:17.295 01:02:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:17.295 01:02:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:17.295 01:02:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:17.295 01:02:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:17.295 01:02:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:17.295 01:02:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:17.295 01:02:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:17.295 01:02:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.295 01:02:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.295 01:02:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:17.296 "name": "raid_bdev1", 00:20:17.296 "uuid": "58bdd132-cf1a-49b4-b473-adacb3b612f8", 00:20:17.296 
"strip_size_kb": 0, 00:20:17.296 "state": "online", 00:20:17.296 "raid_level": "raid1", 00:20:17.296 "superblock": true, 00:20:17.296 "num_base_bdevs": 2, 00:20:17.296 "num_base_bdevs_discovered": 2, 00:20:17.296 "num_base_bdevs_operational": 2, 00:20:17.296 "base_bdevs_list": [ 00:20:17.296 { 00:20:17.296 "name": "BaseBdev1", 00:20:17.296 "uuid": "56e58ac6-c303-5d73-bee8-22f80ee499e4", 00:20:17.296 "is_configured": true, 00:20:17.296 "data_offset": 2048, 00:20:17.296 "data_size": 63488 00:20:17.296 }, 00:20:17.296 { 00:20:17.296 "name": "BaseBdev2", 00:20:17.296 "uuid": "3f89f7ea-e8dd-53d6-b44b-953465737d71", 00:20:17.296 "is_configured": true, 00:20:17.296 "data_offset": 2048, 00:20:17.296 "data_size": 63488 00:20:17.296 } 00:20:17.296 ] 00:20:17.296 }' 00:20:17.296 01:02:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:17.296 01:02:51 -- common/autotest_common.sh@10 -- # set +x 00:20:18.232 01:02:52 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:18.232 01:02:52 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:18.232 [2024-11-18 01:02:52.567183] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.232 01:02:52 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:18.232 01:02:52 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.232 01:02:52 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:18.491 01:02:52 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:18.491 01:02:52 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:18.491 01:02:52 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:18.491 01:02:52 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:18.491 01:02:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:18.491 01:02:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:18.491 01:02:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:18.491 01:02:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:18.491 01:02:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:18.491 01:02:52 -- bdev/nbd_common.sh@12 -- # local i 00:20:18.491 01:02:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:18.491 01:02:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.491 01:02:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:18.750 [2024-11-18 01:02:53.023164] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:20:18.750 /dev/nbd0 00:20:18.750 01:02:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:18.750 01:02:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:18.750 01:02:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:18.750 01:02:53 -- common/autotest_common.sh@867 -- # local i 00:20:18.750 01:02:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:18.750 01:02:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:18.750 01:02:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:18.750 01:02:53 -- common/autotest_common.sh@871 -- # break 00:20:18.750 01:02:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:18.750 01:02:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:18.750 01:02:53 -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:18.750 1+0 records in 00:20:18.750 1+0 records out 00:20:18.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060116 s, 6.8 MB/s 00:20:18.750 01:02:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.750 01:02:53 -- common/autotest_common.sh@884 -- # size=4096 00:20:18.750 01:02:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.750 01:02:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:18.750 01:02:53 -- common/autotest_common.sh@887 -- # return 0 00:20:18.750 01:02:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:18.750 01:02:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.750 01:02:53 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:18.750 01:02:53 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:18.750 01:02:53 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:24.019 63488+0 records in 00:20:24.019 63488+0 records out 00:20:24.019 32505856 bytes (33 MB, 31 MiB) copied, 4.52432 s, 7.2 MB/s 00:20:24.019 01:02:57 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:24.019 01:02:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:24.019 01:02:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:24.019 01:02:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:24.019 01:02:57 -- bdev/nbd_common.sh@51 -- # local i 00:20:24.019 01:02:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.019 01:02:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:24.019 01:02:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:24.019 [2024-11-18 01:02:57.922958] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.019 01:02:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:24.019 01:02:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:24.019 01:02:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.019 01:02:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.019 01:02:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:24.019 01:02:57 -- bdev/nbd_common.sh@41 -- # break 00:20:24.019 01:02:57 -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.019 01:02:57 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:24.019 [2024-11-18 01:02:58.138626] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:24.019 01:02:58 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:24.020 01:02:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:24.020 01:02:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:24.020 01:02:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:24.020 01:02:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:24.020 01:02:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:24.020 01:02:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:24.020 01:02:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:24.020 01:02:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:24.020 01:02:58 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:20:24.020 01:02:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.020 01:02:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.020 01:02:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:24.020 "name": "raid_bdev1", 00:20:24.020 "uuid": "58bdd132-cf1a-49b4-b473-adacb3b612f8", 00:20:24.020 "strip_size_kb": 0, 00:20:24.020 "state": "online", 00:20:24.020 "raid_level": "raid1", 00:20:24.020 "superblock": true, 00:20:24.020 "num_base_bdevs": 2, 00:20:24.020 "num_base_bdevs_discovered": 1, 00:20:24.020 "num_base_bdevs_operational": 1, 00:20:24.020 "base_bdevs_list": [ 00:20:24.020 { 00:20:24.020 "name": null, 00:20:24.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.020 "is_configured": false, 00:20:24.020 "data_offset": 2048, 00:20:24.020 "data_size": 63488 00:20:24.020 }, 00:20:24.020 { 00:20:24.020 "name": "BaseBdev2", 00:20:24.020 "uuid": "3f89f7ea-e8dd-53d6-b44b-953465737d71", 00:20:24.020 "is_configured": true, 00:20:24.020 "data_offset": 2048, 00:20:24.020 "data_size": 63488 00:20:24.020 } 00:20:24.020 ] 00:20:24.020 }' 00:20:24.020 01:02:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:24.020 01:02:58 -- common/autotest_common.sh@10 -- # set +x 00:20:24.587 01:02:58 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:24.847 [2024-11-18 01:02:59.094809] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:24.847 [2024-11-18 01:02:59.094890] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:24.847 [2024-11-18 01:02:59.102699] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e0e0 00:20:24.847 [2024-11-18 01:02:59.105195] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:24.847 01:02:59 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:25.784 01:03:00 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:25.784 01:03:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:25.784 01:03:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:25.784 01:03:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:25.784 01:03:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:25.784 01:03:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.784 01:03:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.043 01:03:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:26.044 "name": "raid_bdev1", 00:20:26.044 "uuid": "58bdd132-cf1a-49b4-b473-adacb3b612f8", 00:20:26.044 "strip_size_kb": 0, 00:20:26.044 "state": "online", 00:20:26.044 "raid_level": "raid1", 00:20:26.044 "superblock": true, 00:20:26.044 "num_base_bdevs": 2, 00:20:26.044 "num_base_bdevs_discovered": 2, 00:20:26.044 "num_base_bdevs_operational": 2, 00:20:26.044 "process": { 00:20:26.044 "type": "rebuild", 00:20:26.044 "target": "spare", 00:20:26.044 "progress": { 00:20:26.044 "blocks": 24576, 00:20:26.044 "percent": 38 00:20:26.044 } 00:20:26.044 }, 00:20:26.044 "base_bdevs_list": [ 00:20:26.044 { 00:20:26.044 "name": "spare", 00:20:26.044 "uuid": "2c195f4f-cc48-5462-9c88-5d945195b0cb", 00:20:26.044 "is_configured": true, 00:20:26.044 
"data_offset": 2048, 00:20:26.044 "data_size": 63488 00:20:26.044 }, 00:20:26.044 { 00:20:26.044 "name": "BaseBdev2", 00:20:26.044 "uuid": "3f89f7ea-e8dd-53d6-b44b-953465737d71", 00:20:26.044 "is_configured": true, 00:20:26.044 "data_offset": 2048, 00:20:26.044 "data_size": 63488 00:20:26.044 } 00:20:26.044 ] 00:20:26.044 }' 00:20:26.044 01:03:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:26.044 01:03:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:26.044 01:03:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:26.303 01:03:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:26.303 01:03:00 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:26.303 [2024-11-18 01:03:00.700008] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:26.561 [2024-11-18 01:03:00.718160] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:26.561 [2024-11-18 01:03:00.718275] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.561 01:03:00 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:26.561 01:03:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:26.561 01:03:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:26.561 01:03:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:26.561 01:03:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:26.562 01:03:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:26.562 01:03:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:26.562 01:03:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:26.562 01:03:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:26.562 01:03:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:26.562 01:03:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.562 01:03:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.820 01:03:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:26.820 "name": "raid_bdev1", 00:20:26.820 "uuid": "58bdd132-cf1a-49b4-b473-adacb3b612f8", 00:20:26.820 "strip_size_kb": 0, 00:20:26.820 "state": "online", 00:20:26.820 "raid_level": "raid1", 00:20:26.820 "superblock": true, 00:20:26.820 "num_base_bdevs": 2, 00:20:26.820 "num_base_bdevs_discovered": 1, 00:20:26.820 "num_base_bdevs_operational": 1, 00:20:26.820 "base_bdevs_list": [ 00:20:26.820 { 00:20:26.820 "name": null, 00:20:26.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.820 "is_configured": false, 00:20:26.820 "data_offset": 2048, 00:20:26.820 "data_size": 63488 00:20:26.820 }, 00:20:26.820 { 00:20:26.820 "name": "BaseBdev2", 00:20:26.820 "uuid": "3f89f7ea-e8dd-53d6-b44b-953465737d71", 00:20:26.820 "is_configured": true, 00:20:26.821 "data_offset": 2048, 00:20:26.821 "data_size": 63488 00:20:26.821 } 00:20:26.821 ] 00:20:26.821 }' 00:20:26.821 01:03:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:26.821 01:03:01 -- common/autotest_common.sh@10 -- # set +x 00:20:27.389 01:03:01 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:27.389 01:03:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:27.389 01:03:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 
00:20:27.389 01:03:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:27.389 01:03:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:27.389 01:03:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.389 01:03:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.389 01:03:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:27.389 "name": "raid_bdev1", 00:20:27.389 "uuid": "58bdd132-cf1a-49b4-b473-adacb3b612f8", 00:20:27.389 "strip_size_kb": 0, 00:20:27.389 "state": "online", 00:20:27.389 "raid_level": "raid1", 00:20:27.389 "superblock": true, 00:20:27.389 "num_base_bdevs": 2, 00:20:27.389 "num_base_bdevs_discovered": 1, 00:20:27.389 "num_base_bdevs_operational": 1, 00:20:27.389 "base_bdevs_list": [ 00:20:27.389 { 00:20:27.389 "name": null, 00:20:27.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.389 "is_configured": false, 00:20:27.389 "data_offset": 2048, 00:20:27.389 "data_size": 63488 00:20:27.389 }, 00:20:27.389 { 00:20:27.389 "name": "BaseBdev2", 00:20:27.389 "uuid": "3f89f7ea-e8dd-53d6-b44b-953465737d71", 00:20:27.389 "is_configured": true, 00:20:27.389 "data_offset": 2048, 00:20:27.389 "data_size": 63488 00:20:27.389 } 00:20:27.389 ] 00:20:27.389 }' 00:20:27.389 01:03:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:27.389 01:03:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:27.389 01:03:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:27.648 01:03:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:27.648 01:03:01 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:27.907 [2024-11-18 01:03:02.050927] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:27.907 [2024-11-18 01:03:02.050995] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:27.907 [2024-11-18 01:03:02.058691] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:20:27.907 [2024-11-18 01:03:02.061090] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:27.907 01:03:02 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:28.844 01:03:03 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.844 01:03:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:28.844 01:03:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:28.844 01:03:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:28.844 01:03:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:28.844 01:03:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.844 01:03:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:29.103 "name": "raid_bdev1", 00:20:29.103 "uuid": "58bdd132-cf1a-49b4-b473-adacb3b612f8", 00:20:29.103 "strip_size_kb": 0, 00:20:29.103 "state": "online", 00:20:29.103 "raid_level": "raid1", 00:20:29.103 "superblock": true, 00:20:29.103 "num_base_bdevs": 2, 00:20:29.103 "num_base_bdevs_discovered": 2, 00:20:29.103 "num_base_bdevs_operational": 2, 00:20:29.103 "process": { 00:20:29.103 "type": "rebuild", 00:20:29.103 "target": "spare", 
00:20:29.103 "progress": { 00:20:29.103 "blocks": 24576, 00:20:29.103 "percent": 38 00:20:29.103 } 00:20:29.103 }, 00:20:29.103 "base_bdevs_list": [ 00:20:29.103 { 00:20:29.103 "name": "spare", 00:20:29.103 "uuid": "2c195f4f-cc48-5462-9c88-5d945195b0cb", 00:20:29.103 "is_configured": true, 00:20:29.103 "data_offset": 2048, 00:20:29.103 "data_size": 63488 00:20:29.103 }, 00:20:29.103 { 00:20:29.103 "name": "BaseBdev2", 00:20:29.103 "uuid": "3f89f7ea-e8dd-53d6-b44b-953465737d71", 00:20:29.103 "is_configured": true, 00:20:29.103 "data_offset": 2048, 00:20:29.103 "data_size": 63488 00:20:29.103 } 00:20:29.103 ] 00:20:29.103 }' 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:29.103 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@657 -- # local timeout=386 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.103 01:03:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.363 01:03:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:29.363 "name": "raid_bdev1", 00:20:29.363 "uuid": "58bdd132-cf1a-49b4-b473-adacb3b612f8", 00:20:29.363 "strip_size_kb": 0, 00:20:29.363 "state": "online", 00:20:29.363 "raid_level": "raid1", 00:20:29.363 "superblock": true, 00:20:29.363 "num_base_bdevs": 2, 00:20:29.363 "num_base_bdevs_discovered": 2, 00:20:29.363 "num_base_bdevs_operational": 2, 00:20:29.363 "process": { 00:20:29.363 "type": "rebuild", 00:20:29.363 "target": "spare", 00:20:29.363 "progress": { 00:20:29.363 "blocks": 30720, 00:20:29.363 "percent": 48 00:20:29.363 } 00:20:29.363 }, 00:20:29.363 "base_bdevs_list": [ 00:20:29.363 { 00:20:29.363 "name": "spare", 00:20:29.363 "uuid": "2c195f4f-cc48-5462-9c88-5d945195b0cb", 00:20:29.363 "is_configured": true, 00:20:29.363 "data_offset": 2048, 00:20:29.363 "data_size": 63488 00:20:29.363 }, 00:20:29.363 { 00:20:29.363 "name": "BaseBdev2", 00:20:29.363 "uuid": "3f89f7ea-e8dd-53d6-b44b-953465737d71", 00:20:29.363 "is_configured": true, 00:20:29.363 "data_offset": 2048, 00:20:29.363 "data_size": 63488 00:20:29.363 } 00:20:29.363 ] 00:20:29.363 }' 00:20:29.363 01:03:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:29.363 01:03:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:20:29.363 01:03:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:29.363 01:03:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:29.363 01:03:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:30.768 01:03:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:30.768 01:03:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.768 01:03:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:30.768 01:03:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:30.768 01:03:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:30.768 01:03:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:30.768 01:03:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.768 01:03:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.768 01:03:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:30.768 "name": "raid_bdev1", 00:20:30.768 "uuid": "58bdd132-cf1a-49b4-b473-adacb3b612f8", 00:20:30.768 "strip_size_kb": 0, 00:20:30.768 "state": "online", 00:20:30.768 "raid_level": "raid1", 00:20:30.768 "superblock": true, 00:20:30.768 "num_base_bdevs": 2, 00:20:30.768 "num_base_bdevs_discovered": 2, 00:20:30.768 "num_base_bdevs_operational": 2, 00:20:30.768 "process": { 00:20:30.768 "type": "rebuild", 00:20:30.768 "target": "spare", 00:20:30.768 "progress": { 00:20:30.768 "blocks": 57344, 00:20:30.768 "percent": 90 00:20:30.768 } 00:20:30.768 }, 00:20:30.768 "base_bdevs_list": [ 00:20:30.768 { 00:20:30.768 "name": "spare", 00:20:30.768 "uuid": "2c195f4f-cc48-5462-9c88-5d945195b0cb", 00:20:30.768 "is_configured": true, 00:20:30.769 "data_offset": 2048, 00:20:30.769 "data_size": 63488 00:20:30.769 }, 00:20:30.769 { 00:20:30.769 "name": "BaseBdev2", 00:20:30.769 "uuid": "3f89f7ea-e8dd-53d6-b44b-953465737d71", 00:20:30.769 "is_configured": true, 00:20:30.769 "data_offset": 2048, 00:20:30.769 "data_size": 63488 00:20:30.769 } 00:20:30.769 ] 00:20:30.769 }' 00:20:30.769 01:03:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:30.769 01:03:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.769 01:03:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:30.769 01:03:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.769 01:03:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:31.027 [2024-11-18 01:03:05.183982] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:31.027 [2024-11-18 01:03:05.184092] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:31.027 [2024-11-18 01:03:05.184275] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.966 01:03:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:31.966 01:03:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.966 01:03:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:31.966 01:03:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:31.966 01:03:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:31.966 01:03:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:31.966 01:03:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.966 01:03:06 -- bdev/bdev_raid.sh@188 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.966 01:03:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:31.966 "name": "raid_bdev1", 00:20:31.966 "uuid": "58bdd132-cf1a-49b4-b473-adacb3b612f8", 00:20:31.966 "strip_size_kb": 0, 00:20:31.966 "state": "online", 00:20:31.966 "raid_level": "raid1", 00:20:31.966 "superblock": true, 00:20:31.966 "num_base_bdevs": 2, 00:20:31.966 "num_base_bdevs_discovered": 2, 00:20:31.966 "num_base_bdevs_operational": 2, 00:20:31.966 "base_bdevs_list": [ 00:20:31.966 { 00:20:31.966 "name": "spare", 00:20:31.966 "uuid": "2c195f4f-cc48-5462-9c88-5d945195b0cb", 00:20:31.966 "is_configured": true, 00:20:31.966 "data_offset": 2048, 00:20:31.966 "data_size": 63488 00:20:31.966 }, 00:20:31.966 { 00:20:31.966 "name": "BaseBdev2", 00:20:31.966 "uuid": "3f89f7ea-e8dd-53d6-b44b-953465737d71", 00:20:31.966 "is_configured": true, 00:20:31.966 "data_offset": 2048, 00:20:31.966 "data_size": 63488 00:20:31.966 } 00:20:31.966 ] 00:20:31.966 }' 00:20:31.966 01:03:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:31.966 01:03:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:31.966 01:03:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:32.225 01:03:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:32.225 01:03:06 -- bdev/bdev_raid.sh@660 -- # break 00:20:32.225 01:03:06 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.225 01:03:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:32.225 01:03:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:32.225 01:03:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:32.225 01:03:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:32.225 01:03:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.225 01:03:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:32.485 "name": "raid_bdev1", 00:20:32.485 "uuid": "58bdd132-cf1a-49b4-b473-adacb3b612f8", 00:20:32.485 "strip_size_kb": 0, 00:20:32.485 "state": "online", 00:20:32.485 "raid_level": "raid1", 00:20:32.485 "superblock": true, 00:20:32.485 "num_base_bdevs": 2, 00:20:32.485 "num_base_bdevs_discovered": 2, 00:20:32.485 "num_base_bdevs_operational": 2, 00:20:32.485 "base_bdevs_list": [ 00:20:32.485 { 00:20:32.485 "name": "spare", 00:20:32.485 "uuid": "2c195f4f-cc48-5462-9c88-5d945195b0cb", 00:20:32.485 "is_configured": true, 00:20:32.485 "data_offset": 2048, 00:20:32.485 "data_size": 63488 00:20:32.485 }, 00:20:32.485 { 00:20:32.485 "name": "BaseBdev2", 00:20:32.485 "uuid": "3f89f7ea-e8dd-53d6-b44b-953465737d71", 00:20:32.485 "is_configured": true, 00:20:32.485 "data_offset": 2048, 00:20:32.485 "data_size": 63488 00:20:32.485 } 00:20:32.485 ] 00:20:32.485 }' 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
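With the rebuild finished, the verify_raid_bdev_state call being set up here checks the steady-state fields of the same JSON: the array must be back online as raid1 with both base bdevs discovered and operational. A condensed sketch of those checks, using the field names exactly as they appear in the dump above (the $rpc shorthand is again an assumption):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state      <<< "$info") == online ]]
    [[ $(jq -r .raid_level <<< "$info") == raid1  ]]
    (( $(jq -r .num_base_bdevs_discovered  <<< "$info") == 2 ))
    (( $(jq -r .num_base_bdevs_operational <<< "$info") == 2 ))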
00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.485 01:03:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.745 01:03:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:32.745 "name": "raid_bdev1", 00:20:32.745 "uuid": "58bdd132-cf1a-49b4-b473-adacb3b612f8", 00:20:32.745 "strip_size_kb": 0, 00:20:32.745 "state": "online", 00:20:32.745 "raid_level": "raid1", 00:20:32.745 "superblock": true, 00:20:32.745 "num_base_bdevs": 2, 00:20:32.745 "num_base_bdevs_discovered": 2, 00:20:32.745 "num_base_bdevs_operational": 2, 00:20:32.745 "base_bdevs_list": [ 00:20:32.745 { 00:20:32.745 "name": "spare", 00:20:32.745 "uuid": "2c195f4f-cc48-5462-9c88-5d945195b0cb", 00:20:32.745 "is_configured": true, 00:20:32.745 "data_offset": 2048, 00:20:32.745 "data_size": 63488 00:20:32.745 }, 00:20:32.745 { 00:20:32.745 "name": "BaseBdev2", 00:20:32.745 "uuid": "3f89f7ea-e8dd-53d6-b44b-953465737d71", 00:20:32.745 "is_configured": true, 00:20:32.745 "data_offset": 2048, 00:20:32.745 "data_size": 63488 00:20:32.745 } 00:20:32.745 ] 00:20:32.745 }' 00:20:32.745 01:03:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:32.745 01:03:06 -- common/autotest_common.sh@10 -- # set +x 00:20:33.314 01:03:07 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:33.573 [2024-11-18 01:03:07.800483] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:33.573 [2024-11-18 01:03:07.800535] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.573 [2024-11-18 01:03:07.800666] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.573 [2024-11-18 01:03:07.800755] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.573 [2024-11-18 01:03:07.800765] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:20:33.573 01:03:07 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.573 01:03:07 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:33.832 01:03:08 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:33.832 01:03:08 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:33.832 01:03:08 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:33.832 01:03:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:33.832 01:03:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:33.832 01:03:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:33.832 01:03:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:33.832 01:03:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 
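The nbd_start_disks helper whose locals are being set up here simply pairs each entry of the bdev list with an /dev/nbdX node through the nbd_start_disk RPC and then waits for the node to become readable. A condensed sketch; the loop shape is an approximation of the helper in nbd_common.sh, and it reuses the waitfornbd probe sketched earlier:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    bdev_list=('BaseBdev1' 'spare')
    nbd_list=('/dev/nbd0' '/dev/nbd1')
    for i in "${!bdev_list[@]}"; do
        # export the bdev over the kernel nbd driver, then wait for it to come up
        $rpc nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
        waitfornbd "$(basename "${nbd_list[$i]}")"
    done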
00:20:33.832 01:03:08 -- bdev/nbd_common.sh@12 -- # local i 00:20:33.832 01:03:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:33.832 01:03:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:33.832 01:03:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:33.832 /dev/nbd0 00:20:33.832 01:03:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:33.832 01:03:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:33.832 01:03:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:33.832 01:03:08 -- common/autotest_common.sh@867 -- # local i 00:20:33.832 01:03:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:33.832 01:03:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:33.832 01:03:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:34.092 01:03:08 -- common/autotest_common.sh@871 -- # break 00:20:34.092 01:03:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:34.092 01:03:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:34.092 01:03:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.092 1+0 records in 00:20:34.092 1+0 records out 00:20:34.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000763284 s, 5.4 MB/s 00:20:34.092 01:03:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.092 01:03:08 -- common/autotest_common.sh@884 -- # size=4096 00:20:34.092 01:03:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.092 01:03:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:34.092 01:03:08 -- common/autotest_common.sh@887 -- # return 0 00:20:34.092 01:03:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.092 01:03:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:34.092 01:03:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:34.351 /dev/nbd1 00:20:34.351 01:03:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:34.351 01:03:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:34.351 01:03:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:34.351 01:03:08 -- common/autotest_common.sh@867 -- # local i 00:20:34.351 01:03:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:34.351 01:03:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:34.351 01:03:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:34.351 01:03:08 -- common/autotest_common.sh@871 -- # break 00:20:34.351 01:03:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:34.351 01:03:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:34.351 01:03:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.351 1+0 records in 00:20:34.351 1+0 records out 00:20:34.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049772 s, 8.2 MB/s 00:20:34.351 01:03:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.351 01:03:08 -- common/autotest_common.sh@884 -- # size=4096 00:20:34.351 01:03:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.351 01:03:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
00:20:34.351 01:03:08 -- common/autotest_common.sh@887 -- # return 0 00:20:34.351 01:03:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.351 01:03:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:34.351 01:03:08 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:34.351 01:03:08 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:34.351 01:03:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:34.351 01:03:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:34.351 01:03:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:34.351 01:03:08 -- bdev/nbd_common.sh@51 -- # local i 00:20:34.351 01:03:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:34.351 01:03:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:34.610 01:03:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:34.610 01:03:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:34.610 01:03:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:34.610 01:03:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:34.610 01:03:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:34.610 01:03:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:34.610 01:03:08 -- bdev/nbd_common.sh@41 -- # break 00:20:34.610 01:03:08 -- bdev/nbd_common.sh@45 -- # return 0 00:20:34.610 01:03:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:34.610 01:03:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:34.869 01:03:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:34.870 01:03:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:34.870 01:03:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:34.870 01:03:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:34.870 01:03:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:34.870 01:03:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:34.870 01:03:09 -- bdev/nbd_common.sh@41 -- # break 00:20:34.870 01:03:09 -- bdev/nbd_common.sh@45 -- # return 0 00:20:34.870 01:03:09 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:34.870 01:03:09 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:34.870 01:03:09 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:34.870 01:03:09 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:35.128 01:03:09 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:35.387 [2024-11-18 01:03:09.555795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:35.387 [2024-11-18 01:03:09.555915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.387 [2024-11-18 01:03:09.555960] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:35.387 [2024-11-18 01:03:09.555992] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.387 [2024-11-18 01:03:09.558854] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.387 [2024-11-18 01:03:09.558933] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
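What follows is the superblock-driven re-assembly check: after BaseBdev1, each remaining passthru bdev (BaseBdev2 and the spare) is torn down and re-created, the raid module re-examines it, finds the superblock written during the earlier run, and re-assembles raid_bdev1 without any explicit bdev_raid_create call; the rebuilt spare is then expected in slot 0 of base_bdevs_list. A condensed sketch of those RPC calls, commands as they appear in the trace, with the $rpc shorthand assumed:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_passthru_delete BaseBdev2
    $rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    $rpc bdev_passthru_delete spare
    $rpc bdev_passthru_create -b spare_delay -p spare    # spare sits on a delay bdev
    # once the last base bdev is examined, raid_bdev1 comes back online on its own;
    # the rebuilt spare should now be the first base bdev in the list
    $rpc bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].name'   # expect: spare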
00:20:35.387 [2024-11-18 01:03:09.559034] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:35.387 [2024-11-18 01:03:09.559112] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:35.387 BaseBdev1 00:20:35.387 01:03:09 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:35.387 01:03:09 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:20:35.387 01:03:09 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:35.646 01:03:09 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:35.906 [2024-11-18 01:03:10.087895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:35.906 [2024-11-18 01:03:10.088003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.906 [2024-11-18 01:03:10.088063] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:35.906 [2024-11-18 01:03:10.088093] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.906 [2024-11-18 01:03:10.088580] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.906 [2024-11-18 01:03:10.088650] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:35.906 [2024-11-18 01:03:10.088746] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:35.906 [2024-11-18 01:03:10.088759] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:35.906 [2024-11-18 01:03:10.088768] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:35.906 [2024-11-18 01:03:10.088795] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:20:35.906 [2024-11-18 01:03:10.088852] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:35.906 BaseBdev2 00:20:35.906 01:03:10 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:35.906 01:03:10 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:36.164 [2024-11-18 01:03:10.471941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:36.164 [2024-11-18 01:03:10.472053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.164 [2024-11-18 01:03:10.472105] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:36.164 [2024-11-18 01:03:10.472132] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.164 [2024-11-18 01:03:10.472640] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.164 [2024-11-18 01:03:10.472695] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:36.164 [2024-11-18 01:03:10.472795] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:36.164 [2024-11-18 01:03:10.472833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:20:36.164 spare 00:20:36.165 01:03:10 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:36.165 01:03:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:36.165 01:03:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:36.165 01:03:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:36.165 01:03:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:36.165 01:03:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:36.165 01:03:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:36.165 01:03:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:36.165 01:03:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:36.165 01:03:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:36.165 01:03:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.165 01:03:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.424 [2024-11-18 01:03:10.572938] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:20:36.424 [2024-11-18 01:03:10.572981] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:36.424 [2024-11-18 01:03:10.573201] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:20:36.424 [2024-11-18 01:03:10.573667] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:20:36.424 [2024-11-18 01:03:10.573689] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:20:36.424 [2024-11-18 01:03:10.573825] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.424 01:03:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:36.424 "name": "raid_bdev1", 00:20:36.424 "uuid": "58bdd132-cf1a-49b4-b473-adacb3b612f8", 00:20:36.424 "strip_size_kb": 0, 00:20:36.424 "state": "online", 00:20:36.424 "raid_level": "raid1", 00:20:36.424 "superblock": true, 00:20:36.424 "num_base_bdevs": 2, 00:20:36.424 "num_base_bdevs_discovered": 2, 00:20:36.424 "num_base_bdevs_operational": 2, 00:20:36.424 "base_bdevs_list": [ 00:20:36.424 { 00:20:36.424 "name": "spare", 00:20:36.424 "uuid": "2c195f4f-cc48-5462-9c88-5d945195b0cb", 00:20:36.424 "is_configured": true, 00:20:36.424 "data_offset": 2048, 00:20:36.424 "data_size": 63488 00:20:36.424 }, 00:20:36.424 { 00:20:36.424 "name": "BaseBdev2", 00:20:36.424 "uuid": "3f89f7ea-e8dd-53d6-b44b-953465737d71", 00:20:36.424 "is_configured": true, 00:20:36.424 "data_offset": 2048, 00:20:36.424 "data_size": 63488 00:20:36.424 } 00:20:36.424 ] 00:20:36.424 }' 00:20:36.424 01:03:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:36.424 01:03:10 -- common/autotest_common.sh@10 -- # set +x 00:20:36.993 01:03:11 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:36.993 01:03:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:36.993 01:03:11 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:36.993 01:03:11 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:36.993 01:03:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:36.993 01:03:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.993 01:03:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:37.253 01:03:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:37.253 "name": "raid_bdev1", 00:20:37.253 "uuid": "58bdd132-cf1a-49b4-b473-adacb3b612f8", 00:20:37.253 "strip_size_kb": 0, 00:20:37.253 "state": "online", 00:20:37.253 "raid_level": "raid1", 00:20:37.253 "superblock": true, 00:20:37.253 "num_base_bdevs": 2, 00:20:37.253 "num_base_bdevs_discovered": 2, 00:20:37.253 "num_base_bdevs_operational": 2, 00:20:37.253 "base_bdevs_list": [ 00:20:37.253 { 00:20:37.253 "name": "spare", 00:20:37.253 "uuid": "2c195f4f-cc48-5462-9c88-5d945195b0cb", 00:20:37.253 "is_configured": true, 00:20:37.253 "data_offset": 2048, 00:20:37.253 "data_size": 63488 00:20:37.253 }, 00:20:37.253 { 00:20:37.253 "name": "BaseBdev2", 00:20:37.253 "uuid": "3f89f7ea-e8dd-53d6-b44b-953465737d71", 00:20:37.253 "is_configured": true, 00:20:37.253 "data_offset": 2048, 00:20:37.253 "data_size": 63488 00:20:37.253 } 00:20:37.253 ] 00:20:37.253 }' 00:20:37.253 01:03:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:37.253 01:03:11 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:37.253 01:03:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:37.253 01:03:11 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:37.253 01:03:11 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.253 01:03:11 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:37.512 01:03:11 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.512 01:03:11 -- bdev/bdev_raid.sh@709 -- # killprocess 133576 00:20:37.512 01:03:11 -- common/autotest_common.sh@936 -- # '[' -z 133576 ']' 00:20:37.512 01:03:11 -- common/autotest_common.sh@940 -- # kill -0 133576 00:20:37.512 01:03:11 -- common/autotest_common.sh@941 -- # uname 00:20:37.512 01:03:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:37.512 01:03:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133576 00:20:37.512 killing process with pid 133576 00:20:37.512 Received shutdown signal, test time was about 60.000000 seconds 00:20:37.512 00:20:37.512 Latency(us) 00:20:37.512 [2024-11-18T01:03:11.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.512 [2024-11-18T01:03:11.911Z] =================================================================================================================== 00:20:37.512 [2024-11-18T01:03:11.911Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.512 01:03:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:37.512 01:03:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:37.512 01:03:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 133576' 00:20:37.512 01:03:11 -- common/autotest_common.sh@955 -- # kill 133576 00:20:37.512 01:03:11 -- common/autotest_common.sh@960 -- # wait 133576 00:20:37.512 [2024-11-18 01:03:11.802252] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:37.512 [2024-11-18 01:03:11.802389] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.512 [2024-11-18 01:03:11.802466] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.512 [2024-11-18 01:03:11.802476] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:20:37.512 [2024-11-18 
01:03:11.859499] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:38.081 ************************************ 00:20:38.081 END TEST raid_rebuild_test_sb 00:20:38.081 ************************************ 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:38.081 00:20:38.081 real 0m23.450s 00:20:38.081 user 0m32.992s 00:20:38.081 sys 0m5.243s 00:20:38.081 01:03:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:38.081 01:03:12 -- common/autotest_common.sh@10 -- # set +x 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:20:38.081 01:03:12 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:38.081 01:03:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:38.081 01:03:12 -- common/autotest_common.sh@10 -- # set +x 00:20:38.081 ************************************ 00:20:38.081 START TEST raid_rebuild_test_io 00:20:38.081 ************************************ 00:20:38.081 01:03:12 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false true 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@544 -- # raid_pid=134184 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:38.081 01:03:12 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134184 /var/tmp/spdk-raid.sock 00:20:38.081 01:03:12 -- common/autotest_common.sh@829 -- # '[' -z 134184 ']' 00:20:38.081 01:03:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:38.081 01:03:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:20:38.081 01:03:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:38.081 01:03:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.081 01:03:12 -- common/autotest_common.sh@10 -- # set +x 00:20:38.081 [2024-11-18 01:03:12.425938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:38.081 [2024-11-18 01:03:12.426169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134184 ] 00:20:38.081 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:38.081 Zero copy mechanism will not be used. 00:20:38.341 [2024-11-18 01:03:12.570430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.341 [2024-11-18 01:03:12.650848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.341 [2024-11-18 01:03:12.731030] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:39.278 01:03:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.278 01:03:13 -- common/autotest_common.sh@862 -- # return 0 00:20:39.278 01:03:13 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:39.278 01:03:13 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:39.278 01:03:13 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:39.278 BaseBdev1 00:20:39.278 01:03:13 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:39.278 01:03:13 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:39.278 01:03:13 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:39.538 BaseBdev2 00:20:39.538 01:03:13 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:39.796 spare_malloc 00:20:39.796 01:03:14 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:40.055 spare_delay 00:20:40.055 01:03:14 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:40.313 [2024-11-18 01:03:14.534227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:40.313 [2024-11-18 01:03:14.534505] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.313 [2024-11-18 01:03:14.534580] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:40.313 [2024-11-18 01:03:14.534708] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.313 [2024-11-18 01:03:14.537843] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.313 [2024-11-18 01:03:14.538031] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:40.313 spare 00:20:40.313 01:03:14 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:40.572 [2024-11-18 01:03:14.774549] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:40.572 [2024-11-18 01:03:14.777349] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:40.572 [2024-11-18 01:03:14.777584] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:20:40.572 [2024-11-18 01:03:14.777624] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:40.572 [2024-11-18 01:03:14.777930] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:20:40.572 [2024-11-18 01:03:14.778519] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:20:40.572 [2024-11-18 01:03:14.778634] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:20:40.572 [2024-11-18 01:03:14.778989] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.572 01:03:14 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:40.572 01:03:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:40.572 01:03:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:40.572 01:03:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:40.572 01:03:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:40.572 01:03:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:40.572 01:03:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:40.572 01:03:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:40.572 01:03:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:40.572 01:03:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:40.572 01:03:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.572 01:03:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.831 01:03:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:40.831 "name": "raid_bdev1", 00:20:40.831 "uuid": "70c30f29-934f-46ac-b11c-1c94f4655c76", 00:20:40.831 "strip_size_kb": 0, 00:20:40.831 "state": "online", 00:20:40.831 "raid_level": "raid1", 00:20:40.831 "superblock": false, 00:20:40.831 "num_base_bdevs": 2, 00:20:40.831 "num_base_bdevs_discovered": 2, 00:20:40.831 "num_base_bdevs_operational": 2, 00:20:40.831 "base_bdevs_list": [ 00:20:40.831 { 00:20:40.831 "name": "BaseBdev1", 00:20:40.831 "uuid": "3dc64a1d-ea7d-45d2-a90a-53cd9dbc25c9", 00:20:40.831 "is_configured": true, 00:20:40.831 "data_offset": 0, 00:20:40.831 "data_size": 65536 00:20:40.831 }, 00:20:40.831 { 00:20:40.831 "name": "BaseBdev2", 00:20:40.831 "uuid": "7e782d80-04d2-404b-bbb4-8f70b37a97e3", 00:20:40.831 "is_configured": true, 00:20:40.831 "data_offset": 0, 00:20:40.831 "data_size": 65536 00:20:40.831 } 00:20:40.831 ] 00:20:40.831 }' 00:20:40.831 01:03:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:40.831 01:03:14 -- common/autotest_common.sh@10 -- # set +x 00:20:41.400 01:03:15 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:41.400 01:03:15 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:41.400 [2024-11-18 01:03:15.783472] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:41.659 01:03:15 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:41.659 01:03:15 -- 
bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.659 01:03:15 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:41.919 [2024-11-18 01:03:16.182998] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:20:41.919 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:41.919 Zero copy mechanism will not be used. 00:20:41.919 Running I/O for 60 seconds... 00:20:41.919 [2024-11-18 01:03:16.241036] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:41.919 [2024-11-18 01:03:16.247206] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000021f0 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.919 01:03:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.180 01:03:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:42.180 "name": "raid_bdev1", 00:20:42.180 "uuid": "70c30f29-934f-46ac-b11c-1c94f4655c76", 00:20:42.180 "strip_size_kb": 0, 00:20:42.180 "state": "online", 00:20:42.180 "raid_level": "raid1", 00:20:42.180 "superblock": false, 00:20:42.180 "num_base_bdevs": 2, 00:20:42.180 "num_base_bdevs_discovered": 1, 00:20:42.180 "num_base_bdevs_operational": 1, 00:20:42.180 "base_bdevs_list": [ 00:20:42.180 { 00:20:42.180 "name": null, 00:20:42.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.180 "is_configured": false, 00:20:42.180 "data_offset": 0, 00:20:42.180 "data_size": 65536 00:20:42.180 }, 00:20:42.180 { 00:20:42.180 "name": "BaseBdev2", 00:20:42.180 "uuid": "7e782d80-04d2-404b-bbb4-8f70b37a97e3", 00:20:42.180 "is_configured": true, 00:20:42.180 "data_offset": 0, 00:20:42.180 "data_size": 65536 00:20:42.180 } 00:20:42.180 ] 00:20:42.180 }' 00:20:42.180 01:03:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:42.180 01:03:16 -- common/autotest_common.sh@10 -- # set +x 00:20:42.748 01:03:17 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:43.008 [2024-11-18 01:03:17.404765] 
bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:43.008 [2024-11-18 01:03:17.405102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:43.266 [2024-11-18 01:03:17.438683] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:20:43.266 [2024-11-18 01:03:17.441519] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:43.266 01:03:17 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:43.266 [2024-11-18 01:03:17.572625] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:43.266 [2024-11-18 01:03:17.573541] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:43.524 [2024-11-18 01:03:17.688891] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:43.524 [2024-11-18 01:03:17.689503] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:43.782 [2024-11-18 01:03:18.009063] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:43.782 [2024-11-18 01:03:18.009992] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:44.040 [2024-11-18 01:03:18.217695] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:44.298 01:03:18 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:44.298 01:03:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:44.298 01:03:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:44.298 01:03:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:44.298 01:03:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:44.298 01:03:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.298 01:03:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.298 [2024-11-18 01:03:18.557353] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:44.601 01:03:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:44.601 "name": "raid_bdev1", 00:20:44.601 "uuid": "70c30f29-934f-46ac-b11c-1c94f4655c76", 00:20:44.601 "strip_size_kb": 0, 00:20:44.601 "state": "online", 00:20:44.601 "raid_level": "raid1", 00:20:44.601 "superblock": false, 00:20:44.601 "num_base_bdevs": 2, 00:20:44.601 "num_base_bdevs_discovered": 2, 00:20:44.601 "num_base_bdevs_operational": 2, 00:20:44.601 "process": { 00:20:44.601 "type": "rebuild", 00:20:44.601 "target": "spare", 00:20:44.601 "progress": { 00:20:44.601 "blocks": 16384, 00:20:44.601 "percent": 25 00:20:44.601 } 00:20:44.601 }, 00:20:44.601 "base_bdevs_list": [ 00:20:44.601 { 00:20:44.601 "name": "spare", 00:20:44.601 "uuid": "6d13f627-9317-58d7-a3c5-c12951c92933", 00:20:44.601 "is_configured": true, 00:20:44.601 "data_offset": 0, 00:20:44.601 "data_size": 65536 00:20:44.601 }, 00:20:44.601 { 00:20:44.601 "name": "BaseBdev2", 00:20:44.601 "uuid": "7e782d80-04d2-404b-bbb4-8f70b37a97e3", 00:20:44.601 "is_configured": true, 00:20:44.601 "data_offset": 0, 00:20:44.601 "data_size": 
65536 00:20:44.601 } 00:20:44.601 ] 00:20:44.601 }' 00:20:44.601 01:03:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:44.601 01:03:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:44.601 01:03:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:44.601 01:03:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:44.601 01:03:18 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:44.877 [2024-11-18 01:03:19.037485] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:44.877 [2024-11-18 01:03:19.164439] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:44.877 [2024-11-18 01:03:19.167819] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.877 [2024-11-18 01:03:19.183971] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000021f0 00:20:44.877 01:03:19 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:44.877 01:03:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:44.877 01:03:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:44.877 01:03:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:44.877 01:03:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:44.877 01:03:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:44.877 01:03:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:44.877 01:03:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:44.877 01:03:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:44.877 01:03:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:44.877 01:03:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.877 01:03:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.136 01:03:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:45.136 "name": "raid_bdev1", 00:20:45.136 "uuid": "70c30f29-934f-46ac-b11c-1c94f4655c76", 00:20:45.136 "strip_size_kb": 0, 00:20:45.136 "state": "online", 00:20:45.136 "raid_level": "raid1", 00:20:45.136 "superblock": false, 00:20:45.136 "num_base_bdevs": 2, 00:20:45.136 "num_base_bdevs_discovered": 1, 00:20:45.136 "num_base_bdevs_operational": 1, 00:20:45.136 "base_bdevs_list": [ 00:20:45.136 { 00:20:45.136 "name": null, 00:20:45.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.136 "is_configured": false, 00:20:45.136 "data_offset": 0, 00:20:45.136 "data_size": 65536 00:20:45.136 }, 00:20:45.136 { 00:20:45.136 "name": "BaseBdev2", 00:20:45.136 "uuid": "7e782d80-04d2-404b-bbb4-8f70b37a97e3", 00:20:45.136 "is_configured": true, 00:20:45.136 "data_offset": 0, 00:20:45.136 "data_size": 65536 00:20:45.136 } 00:20:45.136 ] 00:20:45.136 }' 00:20:45.136 01:03:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:45.136 01:03:19 -- common/autotest_common.sh@10 -- # set +x 00:20:45.705 01:03:20 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:45.705 01:03:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:45.705 01:03:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:45.705 01:03:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:45.705 01:03:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 
00:20:45.705 01:03:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.705 01:03:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.965 01:03:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:45.965 "name": "raid_bdev1", 00:20:45.965 "uuid": "70c30f29-934f-46ac-b11c-1c94f4655c76", 00:20:45.965 "strip_size_kb": 0, 00:20:45.965 "state": "online", 00:20:45.965 "raid_level": "raid1", 00:20:45.965 "superblock": false, 00:20:45.965 "num_base_bdevs": 2, 00:20:45.965 "num_base_bdevs_discovered": 1, 00:20:45.965 "num_base_bdevs_operational": 1, 00:20:45.965 "base_bdevs_list": [ 00:20:45.965 { 00:20:45.965 "name": null, 00:20:45.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.965 "is_configured": false, 00:20:45.965 "data_offset": 0, 00:20:45.965 "data_size": 65536 00:20:45.965 }, 00:20:45.965 { 00:20:45.965 "name": "BaseBdev2", 00:20:45.965 "uuid": "7e782d80-04d2-404b-bbb4-8f70b37a97e3", 00:20:45.965 "is_configured": true, 00:20:45.965 "data_offset": 0, 00:20:45.965 "data_size": 65536 00:20:45.965 } 00:20:45.965 ] 00:20:45.965 }' 00:20:45.965 01:03:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:45.965 01:03:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:45.965 01:03:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:46.224 01:03:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:46.224 01:03:20 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:46.484 [2024-11-18 01:03:20.641362] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:46.484 [2024-11-18 01:03:20.641439] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:46.484 01:03:20 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:46.484 [2024-11-18 01:03:20.703180] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:20:46.484 [2024-11-18 01:03:20.705600] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:46.484 [2024-11-18 01:03:20.820945] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:46.484 [2024-11-18 01:03:20.821633] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:46.743 [2024-11-18 01:03:21.024149] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:46.743 [2024-11-18 01:03:21.024525] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:47.002 [2024-11-18 01:03:21.366987] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:47.571 01:03:21 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.571 01:03:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:47.571 01:03:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:47.571 01:03:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:47.571 01:03:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:47.571 01:03:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.571 01:03:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.571 01:03:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:47.571 "name": "raid_bdev1", 00:20:47.571 "uuid": "70c30f29-934f-46ac-b11c-1c94f4655c76", 00:20:47.571 "strip_size_kb": 0, 00:20:47.571 "state": "online", 00:20:47.571 "raid_level": "raid1", 00:20:47.571 "superblock": false, 00:20:47.571 "num_base_bdevs": 2, 00:20:47.571 "num_base_bdevs_discovered": 2, 00:20:47.571 "num_base_bdevs_operational": 2, 00:20:47.571 "process": { 00:20:47.571 "type": "rebuild", 00:20:47.571 "target": "spare", 00:20:47.571 "progress": { 00:20:47.571 "blocks": 16384, 00:20:47.571 "percent": 25 00:20:47.571 } 00:20:47.571 }, 00:20:47.571 "base_bdevs_list": [ 00:20:47.571 { 00:20:47.571 "name": "spare", 00:20:47.571 "uuid": "6d13f627-9317-58d7-a3c5-c12951c92933", 00:20:47.571 "is_configured": true, 00:20:47.571 "data_offset": 0, 00:20:47.571 "data_size": 65536 00:20:47.571 }, 00:20:47.571 { 00:20:47.571 "name": "BaseBdev2", 00:20:47.571 "uuid": "7e782d80-04d2-404b-bbb4-8f70b37a97e3", 00:20:47.571 "is_configured": true, 00:20:47.571 "data_offset": 0, 00:20:47.571 "data_size": 65536 00:20:47.571 } 00:20:47.571 ] 00:20:47.571 }' 00:20:47.571 01:03:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:47.831 01:03:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:47.831 01:03:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:47.831 01:03:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:47.831 01:03:22 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:47.831 01:03:22 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:47.831 01:03:22 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:47.831 01:03:22 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:47.831 01:03:22 -- bdev/bdev_raid.sh@657 -- # local timeout=405 00:20:47.831 01:03:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:47.831 01:03:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.831 01:03:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:47.831 01:03:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:47.831 01:03:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:47.831 01:03:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:47.831 01:03:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.831 01:03:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.831 [2024-11-18 01:03:22.123746] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:48.090 01:03:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:48.090 "name": "raid_bdev1", 00:20:48.090 "uuid": "70c30f29-934f-46ac-b11c-1c94f4655c76", 00:20:48.090 "strip_size_kb": 0, 00:20:48.090 "state": "online", 00:20:48.090 "raid_level": "raid1", 00:20:48.090 "superblock": false, 00:20:48.090 "num_base_bdevs": 2, 00:20:48.090 "num_base_bdevs_discovered": 2, 00:20:48.090 "num_base_bdevs_operational": 2, 00:20:48.090 "process": { 00:20:48.090 "type": "rebuild", 00:20:48.090 "target": "spare", 00:20:48.090 "progress": { 00:20:48.090 "blocks": 20480, 00:20:48.090 "percent": 31 00:20:48.090 } 00:20:48.090 }, 00:20:48.090 "base_bdevs_list": [ 
00:20:48.090 { 00:20:48.090 "name": "spare", 00:20:48.090 "uuid": "6d13f627-9317-58d7-a3c5-c12951c92933", 00:20:48.090 "is_configured": true, 00:20:48.090 "data_offset": 0, 00:20:48.090 "data_size": 65536 00:20:48.090 }, 00:20:48.090 { 00:20:48.090 "name": "BaseBdev2", 00:20:48.090 "uuid": "7e782d80-04d2-404b-bbb4-8f70b37a97e3", 00:20:48.090 "is_configured": true, 00:20:48.090 "data_offset": 0, 00:20:48.090 "data_size": 65536 00:20:48.090 } 00:20:48.090 ] 00:20:48.090 }' 00:20:48.090 01:03:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:48.090 01:03:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:48.090 01:03:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:48.091 [2024-11-18 01:03:22.339938] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:48.091 01:03:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.091 01:03:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:48.350 [2024-11-18 01:03:22.555866] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:48.611 [2024-11-18 01:03:22.785720] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:48.870 [2024-11-18 01:03:23.215028] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:48.870 [2024-11-18 01:03:23.215415] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:49.129 01:03:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:49.129 01:03:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:49.129 01:03:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:49.129 01:03:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:49.129 01:03:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:49.129 01:03:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:49.129 01:03:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.129 01:03:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.129 [2024-11-18 01:03:23.448969] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:49.390 01:03:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:49.390 "name": "raid_bdev1", 00:20:49.390 "uuid": "70c30f29-934f-46ac-b11c-1c94f4655c76", 00:20:49.390 "strip_size_kb": 0, 00:20:49.390 "state": "online", 00:20:49.390 "raid_level": "raid1", 00:20:49.390 "superblock": false, 00:20:49.390 "num_base_bdevs": 2, 00:20:49.390 "num_base_bdevs_discovered": 2, 00:20:49.390 "num_base_bdevs_operational": 2, 00:20:49.390 "process": { 00:20:49.390 "type": "rebuild", 00:20:49.390 "target": "spare", 00:20:49.390 "progress": { 00:20:49.390 "blocks": 38912, 00:20:49.390 "percent": 59 00:20:49.390 } 00:20:49.390 }, 00:20:49.390 "base_bdevs_list": [ 00:20:49.390 { 00:20:49.390 "name": "spare", 00:20:49.390 "uuid": "6d13f627-9317-58d7-a3c5-c12951c92933", 00:20:49.390 "is_configured": true, 00:20:49.390 "data_offset": 0, 00:20:49.390 "data_size": 65536 00:20:49.390 }, 00:20:49.390 { 00:20:49.390 "name": "BaseBdev2", 00:20:49.390 "uuid": 
"7e782d80-04d2-404b-bbb4-8f70b37a97e3", 00:20:49.390 "is_configured": true, 00:20:49.390 "data_offset": 0, 00:20:49.390 "data_size": 65536 00:20:49.390 } 00:20:49.390 ] 00:20:49.390 }' 00:20:49.390 01:03:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:49.390 [2024-11-18 01:03:23.658099] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:49.390 [2024-11-18 01:03:23.658476] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:49.390 01:03:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:49.390 01:03:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:49.390 01:03:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:49.390 01:03:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:49.650 [2024-11-18 01:03:24.004704] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:49.910 [2024-11-18 01:03:24.237489] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:50.479 01:03:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:50.479 01:03:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.479 01:03:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:50.479 01:03:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:50.479 01:03:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:50.479 01:03:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:50.479 01:03:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.479 01:03:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.479 [2024-11-18 01:03:24.796991] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:20:50.748 01:03:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:50.748 "name": "raid_bdev1", 00:20:50.749 "uuid": "70c30f29-934f-46ac-b11c-1c94f4655c76", 00:20:50.749 "strip_size_kb": 0, 00:20:50.749 "state": "online", 00:20:50.749 "raid_level": "raid1", 00:20:50.749 "superblock": false, 00:20:50.749 "num_base_bdevs": 2, 00:20:50.749 "num_base_bdevs_discovered": 2, 00:20:50.749 "num_base_bdevs_operational": 2, 00:20:50.749 "process": { 00:20:50.749 "type": "rebuild", 00:20:50.749 "target": "spare", 00:20:50.749 "progress": { 00:20:50.749 "blocks": 61440, 00:20:50.749 "percent": 93 00:20:50.749 } 00:20:50.749 }, 00:20:50.749 "base_bdevs_list": [ 00:20:50.749 { 00:20:50.749 "name": "spare", 00:20:50.749 "uuid": "6d13f627-9317-58d7-a3c5-c12951c92933", 00:20:50.749 "is_configured": true, 00:20:50.749 "data_offset": 0, 00:20:50.749 "data_size": 65536 00:20:50.749 }, 00:20:50.749 { 00:20:50.749 "name": "BaseBdev2", 00:20:50.749 "uuid": "7e782d80-04d2-404b-bbb4-8f70b37a97e3", 00:20:50.749 "is_configured": true, 00:20:50.749 "data_offset": 0, 00:20:50.749 "data_size": 65536 00:20:50.749 } 00:20:50.749 ] 00:20:50.749 }' 00:20:50.749 01:03:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:50.749 01:03:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.749 01:03:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:50.749 01:03:25 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.749 01:03:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:50.749 [2024-11-18 01:03:25.130803] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:51.010 [2024-11-18 01:03:25.236731] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:51.010 [2024-11-18 01:03:25.239468] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.948 01:03:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:51.948 01:03:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.948 01:03:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:51.948 01:03:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:51.948 01:03:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:51.948 01:03:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:51.948 01:03:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.948 01:03:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.948 01:03:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:51.948 "name": "raid_bdev1", 00:20:51.948 "uuid": "70c30f29-934f-46ac-b11c-1c94f4655c76", 00:20:51.948 "strip_size_kb": 0, 00:20:51.948 "state": "online", 00:20:51.948 "raid_level": "raid1", 00:20:51.948 "superblock": false, 00:20:51.948 "num_base_bdevs": 2, 00:20:51.948 "num_base_bdevs_discovered": 2, 00:20:51.948 "num_base_bdevs_operational": 2, 00:20:51.948 "base_bdevs_list": [ 00:20:51.948 { 00:20:51.948 "name": "spare", 00:20:51.948 "uuid": "6d13f627-9317-58d7-a3c5-c12951c92933", 00:20:51.948 "is_configured": true, 00:20:51.948 "data_offset": 0, 00:20:51.948 "data_size": 65536 00:20:51.948 }, 00:20:51.948 { 00:20:51.948 "name": "BaseBdev2", 00:20:51.948 "uuid": "7e782d80-04d2-404b-bbb4-8f70b37a97e3", 00:20:51.948 "is_configured": true, 00:20:51.948 "data_offset": 0, 00:20:51.948 "data_size": 65536 00:20:51.948 } 00:20:51.948 ] 00:20:51.948 }' 00:20:51.948 01:03:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:52.208 01:03:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:52.208 01:03:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:52.208 01:03:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:52.208 01:03:26 -- bdev/bdev_raid.sh@660 -- # break 00:20:52.208 01:03:26 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:52.208 01:03:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:52.208 01:03:26 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:52.208 01:03:26 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:52.208 01:03:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:52.208 01:03:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.208 01:03:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:52.468 "name": "raid_bdev1", 00:20:52.468 "uuid": "70c30f29-934f-46ac-b11c-1c94f4655c76", 00:20:52.468 "strip_size_kb": 0, 00:20:52.468 "state": "online", 00:20:52.468 "raid_level": "raid1", 00:20:52.468 "superblock": false, 00:20:52.468 "num_base_bdevs": 2, 00:20:52.468 
"num_base_bdevs_discovered": 2, 00:20:52.468 "num_base_bdevs_operational": 2, 00:20:52.468 "base_bdevs_list": [ 00:20:52.468 { 00:20:52.468 "name": "spare", 00:20:52.468 "uuid": "6d13f627-9317-58d7-a3c5-c12951c92933", 00:20:52.468 "is_configured": true, 00:20:52.468 "data_offset": 0, 00:20:52.468 "data_size": 65536 00:20:52.468 }, 00:20:52.468 { 00:20:52.468 "name": "BaseBdev2", 00:20:52.468 "uuid": "7e782d80-04d2-404b-bbb4-8f70b37a97e3", 00:20:52.468 "is_configured": true, 00:20:52.468 "data_offset": 0, 00:20:52.468 "data_size": 65536 00:20:52.468 } 00:20:52.468 ] 00:20:52.468 }' 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.468 01:03:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.728 01:03:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:52.728 "name": "raid_bdev1", 00:20:52.728 "uuid": "70c30f29-934f-46ac-b11c-1c94f4655c76", 00:20:52.728 "strip_size_kb": 0, 00:20:52.728 "state": "online", 00:20:52.728 "raid_level": "raid1", 00:20:52.728 "superblock": false, 00:20:52.728 "num_base_bdevs": 2, 00:20:52.728 "num_base_bdevs_discovered": 2, 00:20:52.728 "num_base_bdevs_operational": 2, 00:20:52.728 "base_bdevs_list": [ 00:20:52.728 { 00:20:52.728 "name": "spare", 00:20:52.728 "uuid": "6d13f627-9317-58d7-a3c5-c12951c92933", 00:20:52.728 "is_configured": true, 00:20:52.728 "data_offset": 0, 00:20:52.728 "data_size": 65536 00:20:52.728 }, 00:20:52.728 { 00:20:52.728 "name": "BaseBdev2", 00:20:52.728 "uuid": "7e782d80-04d2-404b-bbb4-8f70b37a97e3", 00:20:52.728 "is_configured": true, 00:20:52.728 "data_offset": 0, 00:20:52.728 "data_size": 65536 00:20:52.728 } 00:20:52.728 ] 00:20:52.728 }' 00:20:52.728 01:03:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:52.728 01:03:26 -- common/autotest_common.sh@10 -- # set +x 00:20:53.297 01:03:27 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:53.557 [2024-11-18 01:03:27.735948] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:53.557 [2024-11-18 01:03:27.736000] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:53.557 00:20:53.557 Latency(us) 00:20:53.557 [2024-11-18T01:03:27.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:53.557 [2024-11-18T01:03:27.956Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:53.557 raid_bdev1 : 11.64 98.21 294.64 0.00 0.00 14311.46 354.99 117839.97 00:20:53.557 [2024-11-18T01:03:27.956Z] =================================================================================================================== 00:20:53.557 [2024-11-18T01:03:27.956Z] Total : 98.21 294.64 0.00 0.00 14311.46 354.99 117839.97 00:20:53.557 [2024-11-18 01:03:27.829029] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.557 [2024-11-18 01:03:27.829100] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:53.557 [2024-11-18 01:03:27.829188] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:53.557 [2024-11-18 01:03:27.829200] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:20:53.557 0 00:20:53.557 01:03:27 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.557 01:03:27 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:53.817 01:03:28 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:53.817 01:03:28 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:53.817 01:03:28 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:53.817 01:03:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:53.817 01:03:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:53.817 01:03:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:53.817 01:03:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:53.817 01:03:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:53.817 01:03:28 -- bdev/nbd_common.sh@12 -- # local i 00:20:53.817 01:03:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:53.817 01:03:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:53.817 01:03:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:54.077 /dev/nbd0 00:20:54.077 01:03:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:54.077 01:03:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:54.077 01:03:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:54.077 01:03:28 -- common/autotest_common.sh@867 -- # local i 00:20:54.077 01:03:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:54.077 01:03:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:54.077 01:03:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:54.077 01:03:28 -- common/autotest_common.sh@871 -- # break 00:20:54.077 01:03:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:54.077 01:03:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:54.077 01:03:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:54.077 1+0 records in 00:20:54.077 1+0 records out 00:20:54.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486686 s, 8.4 MB/s 00:20:54.077 01:03:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:54.077 01:03:28 -- common/autotest_common.sh@884 -- # size=4096 00:20:54.077 01:03:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:20:54.077 01:03:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:54.077 01:03:28 -- common/autotest_common.sh@887 -- # return 0 00:20:54.077 01:03:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:54.077 01:03:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:54.077 01:03:28 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:54.077 01:03:28 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:20:54.077 01:03:28 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:20:54.077 01:03:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:54.077 01:03:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:54.077 01:03:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:54.077 01:03:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:54.077 01:03:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:54.077 01:03:28 -- bdev/nbd_common.sh@12 -- # local i 00:20:54.077 01:03:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:54.077 01:03:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:54.077 01:03:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:54.335 /dev/nbd1 00:20:54.335 01:03:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:54.335 01:03:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:54.335 01:03:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:54.335 01:03:28 -- common/autotest_common.sh@867 -- # local i 00:20:54.335 01:03:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:54.336 01:03:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:54.336 01:03:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:54.336 01:03:28 -- common/autotest_common.sh@871 -- # break 00:20:54.336 01:03:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:54.336 01:03:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:54.336 01:03:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:54.336 1+0 records in 00:20:54.336 1+0 records out 00:20:54.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059545 s, 6.9 MB/s 00:20:54.336 01:03:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:54.336 01:03:28 -- common/autotest_common.sh@884 -- # size=4096 00:20:54.336 01:03:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:54.336 01:03:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:54.336 01:03:28 -- common/autotest_common.sh@887 -- # return 0 00:20:54.336 01:03:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:54.336 01:03:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:54.336 01:03:28 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:54.595 01:03:28 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:54.595 01:03:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:54.595 01:03:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:54.595 01:03:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:54.595 01:03:28 -- bdev/nbd_common.sh@51 -- # local i 00:20:54.595 01:03:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:54.595 01:03:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:54.855 01:03:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:54.855 01:03:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:54.855 01:03:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:54.855 01:03:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:54.855 01:03:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.855 01:03:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:54.855 01:03:29 -- bdev/nbd_common.sh@41 -- # break 00:20:54.855 01:03:29 -- bdev/nbd_common.sh@45 -- # return 0 00:20:54.855 01:03:29 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:54.855 01:03:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:54.855 01:03:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:54.855 01:03:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:54.855 01:03:29 -- bdev/nbd_common.sh@51 -- # local i 00:20:54.855 01:03:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:54.855 01:03:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:55.114 01:03:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:55.114 01:03:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:55.114 01:03:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:55.114 01:03:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:55.114 01:03:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:55.114 01:03:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:55.114 01:03:29 -- bdev/nbd_common.sh@41 -- # break 00:20:55.114 01:03:29 -- bdev/nbd_common.sh@45 -- # return 0 00:20:55.114 01:03:29 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:55.114 01:03:29 -- bdev/bdev_raid.sh@709 -- # killprocess 134184 00:20:55.114 01:03:29 -- common/autotest_common.sh@936 -- # '[' -z 134184 ']' 00:20:55.114 01:03:29 -- common/autotest_common.sh@940 -- # kill -0 134184 00:20:55.114 01:03:29 -- common/autotest_common.sh@941 -- # uname 00:20:55.114 01:03:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:55.114 01:03:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134184 00:20:55.114 01:03:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:55.114 01:03:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:55.114 01:03:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134184' 00:20:55.114 killing process with pid 134184 00:20:55.114 01:03:29 -- common/autotest_common.sh@955 -- # kill 134184 00:20:55.114 Received shutdown signal, test time was about 13.168771 seconds 00:20:55.114 00:20:55.114 Latency(us) 00:20:55.114 [2024-11-18T01:03:29.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.114 [2024-11-18T01:03:29.513Z] =================================================================================================================== 00:20:55.114 [2024-11-18T01:03:29.513Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.114 [2024-11-18 01:03:29.354855] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:55.114 01:03:29 -- common/autotest_common.sh@960 -- # wait 134184 00:20:55.114 [2024-11-18 01:03:29.403155] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:55.682 00:20:55.682 real 0m17.458s 00:20:55.682 
user 0m26.335s 00:20:55.682 sys 0m2.619s 00:20:55.682 01:03:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:55.682 01:03:29 -- common/autotest_common.sh@10 -- # set +x 00:20:55.682 ************************************ 00:20:55.682 END TEST raid_rebuild_test_io 00:20:55.682 ************************************ 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:20:55.682 01:03:29 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:55.682 01:03:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:55.682 01:03:29 -- common/autotest_common.sh@10 -- # set +x 00:20:55.682 ************************************ 00:20:55.682 START TEST raid_rebuild_test_sb_io 00:20:55.682 ************************************ 00:20:55.682 01:03:29 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true true 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@544 -- # raid_pid=134655 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134655 /var/tmp/spdk-raid.sock 00:20:55.682 01:03:29 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:55.682 01:03:29 -- common/autotest_common.sh@829 -- # '[' -z 134655 ']' 00:20:55.682 01:03:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:55.683 01:03:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.683 01:03:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:55.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:20:55.683 01:03:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.683 01:03:29 -- common/autotest_common.sh@10 -- # set +x 00:20:55.683 [2024-11-18 01:03:29.978672] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:55.683 [2024-11-18 01:03:29.978976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134655 ] 00:20:55.683 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:55.683 Zero copy mechanism will not be used. 00:20:55.942 [2024-11-18 01:03:30.132015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.942 [2024-11-18 01:03:30.213315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.942 [2024-11-18 01:03:30.291770] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:56.879 01:03:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.879 01:03:30 -- common/autotest_common.sh@862 -- # return 0 00:20:56.879 01:03:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:56.879 01:03:30 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:56.879 01:03:30 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:56.879 BaseBdev1_malloc 00:20:56.879 01:03:31 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:57.138 [2024-11-18 01:03:31.376598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:57.138 [2024-11-18 01:03:31.376735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.138 [2024-11-18 01:03:31.376787] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:20:57.138 [2024-11-18 01:03:31.376846] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.138 [2024-11-18 01:03:31.379795] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.138 [2024-11-18 01:03:31.379872] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:57.138 BaseBdev1 00:20:57.138 01:03:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:57.138 01:03:31 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:57.138 01:03:31 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:57.397 BaseBdev2_malloc 00:20:57.397 01:03:31 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:57.667 [2024-11-18 01:03:31.825042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:57.667 [2024-11-18 01:03:31.825179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.667 [2024-11-18 01:03:31.825225] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:20:57.667 [2024-11-18 01:03:31.825276] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.667 [2024-11-18 01:03:31.828123] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:20:57.667 [2024-11-18 01:03:31.828183] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:57.667 BaseBdev2 00:20:57.667 01:03:31 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:57.667 spare_malloc 00:20:57.939 01:03:32 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:57.939 spare_delay 00:20:57.939 01:03:32 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:58.199 [2024-11-18 01:03:32.506117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:58.199 [2024-11-18 01:03:32.506233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.199 [2024-11-18 01:03:32.506281] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:58.199 [2024-11-18 01:03:32.506328] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.199 [2024-11-18 01:03:32.509136] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.199 [2024-11-18 01:03:32.509197] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:58.199 spare 00:20:58.199 01:03:32 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:58.458 [2024-11-18 01:03:32.758302] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:58.458 [2024-11-18 01:03:32.760799] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:58.458 [2024-11-18 01:03:32.761028] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:20:58.458 [2024-11-18 01:03:32.761040] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:58.458 [2024-11-18 01:03:32.761225] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:20:58.458 [2024-11-18 01:03:32.761681] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:20:58.459 [2024-11-18 01:03:32.761701] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:20:58.459 [2024-11-18 01:03:32.761861] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.459 01:03:32 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:58.459 01:03:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:58.459 01:03:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:58.459 01:03:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:58.459 01:03:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:58.459 01:03:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:58.459 01:03:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:58.459 01:03:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:58.459 01:03:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:58.459 01:03:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:58.459 01:03:32 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.459 01:03:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.718 01:03:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:58.718 "name": "raid_bdev1", 00:20:58.718 "uuid": "6d8a6693-a4d7-4edc-87ce-03bcaa18448e", 00:20:58.718 "strip_size_kb": 0, 00:20:58.718 "state": "online", 00:20:58.718 "raid_level": "raid1", 00:20:58.718 "superblock": true, 00:20:58.718 "num_base_bdevs": 2, 00:20:58.718 "num_base_bdevs_discovered": 2, 00:20:58.718 "num_base_bdevs_operational": 2, 00:20:58.718 "base_bdevs_list": [ 00:20:58.718 { 00:20:58.718 "name": "BaseBdev1", 00:20:58.718 "uuid": "27d1f2f6-bc6e-5cb9-a17d-e76bfb13fa0c", 00:20:58.718 "is_configured": true, 00:20:58.718 "data_offset": 2048, 00:20:58.718 "data_size": 63488 00:20:58.718 }, 00:20:58.718 { 00:20:58.718 "name": "BaseBdev2", 00:20:58.718 "uuid": "f2bdd791-929b-5bab-976e-b26fd1cb3203", 00:20:58.718 "is_configured": true, 00:20:58.718 "data_offset": 2048, 00:20:58.718 "data_size": 63488 00:20:58.718 } 00:20:58.718 ] 00:20:58.718 }' 00:20:58.718 01:03:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:58.718 01:03:33 -- common/autotest_common.sh@10 -- # set +x 00:20:59.286 01:03:33 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:59.286 01:03:33 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:59.545 [2024-11-18 01:03:33.806553] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:59.545 01:03:33 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:59.545 01:03:33 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:59.545 01:03:33 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.805 01:03:34 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:59.805 01:03:34 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:59.805 01:03:34 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:59.805 01:03:34 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:59.805 [2024-11-18 01:03:34.106010] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:20:59.805 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:59.805 Zero copy mechanism will not be used. 00:20:59.805 Running I/O for 60 seconds... 
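Note: in plain terms, the setup traced above is equivalent to running the following RPC sequence against the bdevperf socket (commands as they appear verbatim in the xtrace; the rpc.py path is shortened here to scripts/rpc.py, and this restatement is a reading aid, not additional test output):

# two passthru-wrapped malloc bdevs plus a delayed 'spare', then a raid1 bdev with on-disk superblock (-s)
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
# inspect the assembled array (data_offset 2048 blocks and data_size 63488 blocks, per the JSON above)
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'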
00:21:00.065 [2024-11-18 01:03:34.275714] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:00.065 [2024-11-18 01:03:34.276000] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:21:00.065 01:03:34 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:00.065 01:03:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:00.065 01:03:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:00.065 01:03:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:00.065 01:03:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:00.065 01:03:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:00.065 01:03:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:00.065 01:03:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:00.065 01:03:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:00.065 01:03:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:00.065 01:03:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.065 01:03:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.324 01:03:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:00.324 "name": "raid_bdev1", 00:21:00.324 "uuid": "6d8a6693-a4d7-4edc-87ce-03bcaa18448e", 00:21:00.324 "strip_size_kb": 0, 00:21:00.324 "state": "online", 00:21:00.324 "raid_level": "raid1", 00:21:00.324 "superblock": true, 00:21:00.324 "num_base_bdevs": 2, 00:21:00.324 "num_base_bdevs_discovered": 1, 00:21:00.324 "num_base_bdevs_operational": 1, 00:21:00.324 "base_bdevs_list": [ 00:21:00.324 { 00:21:00.324 "name": null, 00:21:00.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.324 "is_configured": false, 00:21:00.324 "data_offset": 2048, 00:21:00.324 "data_size": 63488 00:21:00.324 }, 00:21:00.324 { 00:21:00.324 "name": "BaseBdev2", 00:21:00.324 "uuid": "f2bdd791-929b-5bab-976e-b26fd1cb3203", 00:21:00.324 "is_configured": true, 00:21:00.324 "data_offset": 2048, 00:21:00.325 "data_size": 63488 00:21:00.325 } 00:21:00.325 ] 00:21:00.325 }' 00:21:00.325 01:03:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:00.325 01:03:34 -- common/autotest_common.sh@10 -- # set +x 00:21:00.894 01:03:35 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:01.153 [2024-11-18 01:03:35.299532] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:01.153 [2024-11-18 01:03:35.299608] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:01.153 01:03:35 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:01.153 [2024-11-18 01:03:35.345959] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:21:01.153 [2024-11-18 01:03:35.348513] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:01.153 [2024-11-18 01:03:35.452566] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:01.153 [2024-11-18 01:03:35.453184] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:01.413 [2024-11-18 01:03:35.686330] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:21:01.413 [2024-11-18 01:03:35.686706] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:01.672 [2024-11-18 01:03:36.030637] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:01.932 [2024-11-18 01:03:36.140675] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:01.932 [2024-11-18 01:03:36.141049] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:02.191 01:03:36 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.191 01:03:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:02.191 01:03:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:02.191 01:03:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:02.191 01:03:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:02.192 01:03:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.192 01:03:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.451 01:03:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:02.451 "name": "raid_bdev1", 00:21:02.451 "uuid": "6d8a6693-a4d7-4edc-87ce-03bcaa18448e", 00:21:02.451 "strip_size_kb": 0, 00:21:02.451 "state": "online", 00:21:02.451 "raid_level": "raid1", 00:21:02.451 "superblock": true, 00:21:02.451 "num_base_bdevs": 2, 00:21:02.451 "num_base_bdevs_discovered": 2, 00:21:02.451 "num_base_bdevs_operational": 2, 00:21:02.451 "process": { 00:21:02.451 "type": "rebuild", 00:21:02.451 "target": "spare", 00:21:02.451 "progress": { 00:21:02.451 "blocks": 14336, 00:21:02.451 "percent": 22 00:21:02.451 } 00:21:02.451 }, 00:21:02.451 "base_bdevs_list": [ 00:21:02.451 { 00:21:02.451 "name": "spare", 00:21:02.451 "uuid": "f2ce0f53-91a7-58e1-a982-ba636ae286de", 00:21:02.451 "is_configured": true, 00:21:02.451 "data_offset": 2048, 00:21:02.451 "data_size": 63488 00:21:02.451 }, 00:21:02.451 { 00:21:02.451 "name": "BaseBdev2", 00:21:02.451 "uuid": "f2bdd791-929b-5bab-976e-b26fd1cb3203", 00:21:02.451 "is_configured": true, 00:21:02.451 "data_offset": 2048, 00:21:02.451 "data_size": 63488 00:21:02.451 } 00:21:02.451 ] 00:21:02.451 }' 00:21:02.451 01:03:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:02.451 [2024-11-18 01:03:36.605489] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:02.451 [2024-11-18 01:03:36.605853] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:02.451 01:03:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.451 01:03:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:02.452 01:03:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.452 01:03:36 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:02.712 [2024-11-18 01:03:36.866245] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:02.712 [2024-11-18 01:03:36.935205] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:02.712 
[2024-11-18 01:03:36.935861] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:02.712 [2024-11-18 01:03:37.043578] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:02.712 [2024-11-18 01:03:37.052831] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.712 [2024-11-18 01:03:37.086397] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:21:02.971 01:03:37 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:02.971 01:03:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:02.971 01:03:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:02.971 01:03:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:02.971 01:03:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:02.971 01:03:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:02.971 01:03:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:02.971 01:03:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:02.971 01:03:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:02.971 01:03:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:02.971 01:03:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.971 01:03:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.231 01:03:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:03.231 "name": "raid_bdev1", 00:21:03.231 "uuid": "6d8a6693-a4d7-4edc-87ce-03bcaa18448e", 00:21:03.231 "strip_size_kb": 0, 00:21:03.231 "state": "online", 00:21:03.231 "raid_level": "raid1", 00:21:03.231 "superblock": true, 00:21:03.231 "num_base_bdevs": 2, 00:21:03.231 "num_base_bdevs_discovered": 1, 00:21:03.231 "num_base_bdevs_operational": 1, 00:21:03.231 "base_bdevs_list": [ 00:21:03.231 { 00:21:03.231 "name": null, 00:21:03.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.231 "is_configured": false, 00:21:03.231 "data_offset": 2048, 00:21:03.231 "data_size": 63488 00:21:03.231 }, 00:21:03.231 { 00:21:03.231 "name": "BaseBdev2", 00:21:03.231 "uuid": "f2bdd791-929b-5bab-976e-b26fd1cb3203", 00:21:03.231 "is_configured": true, 00:21:03.231 "data_offset": 2048, 00:21:03.231 "data_size": 63488 00:21:03.231 } 00:21:03.231 ] 00:21:03.231 }' 00:21:03.231 01:03:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:03.231 01:03:37 -- common/autotest_common.sh@10 -- # set +x 00:21:03.800 01:03:37 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:03.800 01:03:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:03.800 01:03:37 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:03.800 01:03:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:03.800 01:03:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:03.800 01:03:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.800 01:03:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.800 01:03:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:03.800 "name": "raid_bdev1", 00:21:03.800 "uuid": "6d8a6693-a4d7-4edc-87ce-03bcaa18448e", 00:21:03.800 "strip_size_kb": 0, 00:21:03.800 "state": "online", 00:21:03.800 
"raid_level": "raid1", 00:21:03.800 "superblock": true, 00:21:03.800 "num_base_bdevs": 2, 00:21:03.800 "num_base_bdevs_discovered": 1, 00:21:03.800 "num_base_bdevs_operational": 1, 00:21:03.800 "base_bdevs_list": [ 00:21:03.800 { 00:21:03.800 "name": null, 00:21:03.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.800 "is_configured": false, 00:21:03.800 "data_offset": 2048, 00:21:03.800 "data_size": 63488 00:21:03.800 }, 00:21:03.800 { 00:21:03.800 "name": "BaseBdev2", 00:21:03.800 "uuid": "f2bdd791-929b-5bab-976e-b26fd1cb3203", 00:21:03.800 "is_configured": true, 00:21:03.800 "data_offset": 2048, 00:21:03.800 "data_size": 63488 00:21:03.800 } 00:21:03.800 ] 00:21:03.800 }' 00:21:03.800 01:03:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:04.059 01:03:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:04.059 01:03:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:04.059 01:03:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:04.059 01:03:38 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:04.318 [2024-11-18 01:03:38.520163] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:04.318 [2024-11-18 01:03:38.520480] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:04.318 01:03:38 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:04.318 [2024-11-18 01:03:38.577277] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:21:04.318 [2024-11-18 01:03:38.579927] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:04.577 [2024-11-18 01:03:38.855486] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:04.577 [2024-11-18 01:03:38.856138] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:04.836 [2024-11-18 01:03:39.177962] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:04.836 [2024-11-18 01:03:39.178918] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:05.094 [2024-11-18 01:03:39.402486] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:05.353 01:03:39 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:05.353 01:03:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:05.353 01:03:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:05.353 01:03:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:05.353 01:03:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:05.353 01:03:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.353 01:03:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.353 [2024-11-18 01:03:39.739013] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:05.353 [2024-11-18 01:03:39.739849] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:05.612 01:03:39 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:05.612 "name": "raid_bdev1", 00:21:05.612 "uuid": "6d8a6693-a4d7-4edc-87ce-03bcaa18448e", 00:21:05.612 "strip_size_kb": 0, 00:21:05.612 "state": "online", 00:21:05.612 "raid_level": "raid1", 00:21:05.612 "superblock": true, 00:21:05.612 "num_base_bdevs": 2, 00:21:05.612 "num_base_bdevs_discovered": 2, 00:21:05.612 "num_base_bdevs_operational": 2, 00:21:05.612 "process": { 00:21:05.612 "type": "rebuild", 00:21:05.612 "target": "spare", 00:21:05.612 "progress": { 00:21:05.612 "blocks": 14336, 00:21:05.612 "percent": 22 00:21:05.612 } 00:21:05.612 }, 00:21:05.612 "base_bdevs_list": [ 00:21:05.612 { 00:21:05.612 "name": "spare", 00:21:05.612 "uuid": "f2ce0f53-91a7-58e1-a982-ba636ae286de", 00:21:05.612 "is_configured": true, 00:21:05.612 "data_offset": 2048, 00:21:05.612 "data_size": 63488 00:21:05.612 }, 00:21:05.612 { 00:21:05.612 "name": "BaseBdev2", 00:21:05.612 "uuid": "f2bdd791-929b-5bab-976e-b26fd1cb3203", 00:21:05.612 "is_configured": true, 00:21:05.612 "data_offset": 2048, 00:21:05.612 "data_size": 63488 00:21:05.612 } 00:21:05.612 ] 00:21:05.612 }' 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:05.612 [2024-11-18 01:03:39.863215] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:05.612 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@657 -- # local timeout=422 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.612 01:03:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.872 [2024-11-18 01:03:40.088982] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:05.872 01:03:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:05.872 "name": "raid_bdev1", 00:21:05.872 "uuid": "6d8a6693-a4d7-4edc-87ce-03bcaa18448e", 00:21:05.872 "strip_size_kb": 0, 00:21:05.872 "state": "online", 00:21:05.872 "raid_level": "raid1", 00:21:05.872 "superblock": true, 00:21:05.872 "num_base_bdevs": 2, 00:21:05.872 "num_base_bdevs_discovered": 2, 00:21:05.872 "num_base_bdevs_operational": 2, 00:21:05.872 "process": { 00:21:05.872 "type": "rebuild", 00:21:05.872 
"target": "spare", 00:21:05.872 "progress": { 00:21:05.872 "blocks": 18432, 00:21:05.872 "percent": 29 00:21:05.872 } 00:21:05.872 }, 00:21:05.872 "base_bdevs_list": [ 00:21:05.872 { 00:21:05.872 "name": "spare", 00:21:05.872 "uuid": "f2ce0f53-91a7-58e1-a982-ba636ae286de", 00:21:05.872 "is_configured": true, 00:21:05.872 "data_offset": 2048, 00:21:05.872 "data_size": 63488 00:21:05.872 }, 00:21:05.872 { 00:21:05.872 "name": "BaseBdev2", 00:21:05.872 "uuid": "f2bdd791-929b-5bab-976e-b26fd1cb3203", 00:21:05.872 "is_configured": true, 00:21:05.872 "data_offset": 2048, 00:21:05.872 "data_size": 63488 00:21:05.872 } 00:21:05.872 ] 00:21:05.872 }' 00:21:05.872 01:03:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:05.872 01:03:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:05.872 01:03:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:05.872 01:03:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:05.872 01:03:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:06.131 [2024-11-18 01:03:40.313377] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:06.390 [2024-11-18 01:03:40.666338] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:06.390 [2024-11-18 01:03:40.781739] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:06.959 [2024-11-18 01:03:41.139450] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:06.959 01:03:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:06.959 01:03:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.959 01:03:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:06.959 01:03:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:06.959 01:03:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:06.959 01:03:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:06.959 01:03:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.959 01:03:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.219 01:03:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:07.219 "name": "raid_bdev1", 00:21:07.219 "uuid": "6d8a6693-a4d7-4edc-87ce-03bcaa18448e", 00:21:07.219 "strip_size_kb": 0, 00:21:07.219 "state": "online", 00:21:07.219 "raid_level": "raid1", 00:21:07.219 "superblock": true, 00:21:07.219 "num_base_bdevs": 2, 00:21:07.219 "num_base_bdevs_discovered": 2, 00:21:07.219 "num_base_bdevs_operational": 2, 00:21:07.219 "process": { 00:21:07.219 "type": "rebuild", 00:21:07.219 "target": "spare", 00:21:07.219 "progress": { 00:21:07.219 "blocks": 34816, 00:21:07.219 "percent": 54 00:21:07.219 } 00:21:07.219 }, 00:21:07.219 "base_bdevs_list": [ 00:21:07.219 { 00:21:07.219 "name": "spare", 00:21:07.219 "uuid": "f2ce0f53-91a7-58e1-a982-ba636ae286de", 00:21:07.219 "is_configured": true, 00:21:07.219 "data_offset": 2048, 00:21:07.219 "data_size": 63488 00:21:07.219 }, 00:21:07.219 { 00:21:07.219 "name": "BaseBdev2", 00:21:07.219 "uuid": "f2bdd791-929b-5bab-976e-b26fd1cb3203", 00:21:07.219 "is_configured": true, 00:21:07.219 "data_offset": 2048, 00:21:07.219 "data_size": 63488 00:21:07.219 } 00:21:07.219 
] 00:21:07.219 }' 00:21:07.219 01:03:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:07.219 01:03:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:07.219 01:03:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:07.219 01:03:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.219 01:03:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:07.219 [2024-11-18 01:03:41.471012] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:07.219 [2024-11-18 01:03:41.471829] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:07.478 [2024-11-18 01:03:41.683153] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:08.416 01:03:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:08.416 01:03:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:08.416 01:03:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:08.416 01:03:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:08.416 01:03:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:08.416 01:03:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:08.416 01:03:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.416 01:03:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.416 [2024-11-18 01:03:42.618351] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:08.416 01:03:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:08.416 "name": "raid_bdev1", 00:21:08.416 "uuid": "6d8a6693-a4d7-4edc-87ce-03bcaa18448e", 00:21:08.416 "strip_size_kb": 0, 00:21:08.416 "state": "online", 00:21:08.416 "raid_level": "raid1", 00:21:08.416 "superblock": true, 00:21:08.416 "num_base_bdevs": 2, 00:21:08.416 "num_base_bdevs_discovered": 2, 00:21:08.416 "num_base_bdevs_operational": 2, 00:21:08.416 "process": { 00:21:08.416 "type": "rebuild", 00:21:08.416 "target": "spare", 00:21:08.416 "progress": { 00:21:08.416 "blocks": 59392, 00:21:08.416 "percent": 93 00:21:08.416 } 00:21:08.416 }, 00:21:08.416 "base_bdevs_list": [ 00:21:08.416 { 00:21:08.416 "name": "spare", 00:21:08.416 "uuid": "f2ce0f53-91a7-58e1-a982-ba636ae286de", 00:21:08.416 "is_configured": true, 00:21:08.416 "data_offset": 2048, 00:21:08.416 "data_size": 63488 00:21:08.416 }, 00:21:08.416 { 00:21:08.416 "name": "BaseBdev2", 00:21:08.416 "uuid": "f2bdd791-929b-5bab-976e-b26fd1cb3203", 00:21:08.416 "is_configured": true, 00:21:08.416 "data_offset": 2048, 00:21:08.416 "data_size": 63488 00:21:08.416 } 00:21:08.416 ] 00:21:08.416 }' 00:21:08.416 01:03:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:08.416 01:03:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:08.416 01:03:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:08.416 01:03:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:08.416 01:03:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:08.675 [2024-11-18 01:03:42.952870] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:08.675 [2024-11-18 01:03:43.052848] bdev_raid.c:2285:raid_bdev_process_finish_done: 
*NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:08.675 [2024-11-18 01:03:43.055793] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.620 01:03:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:09.620 01:03:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:09.620 01:03:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:09.620 01:03:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:09.620 01:03:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:09.620 01:03:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:09.620 01:03:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.620 01:03:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.886 01:03:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:09.886 "name": "raid_bdev1", 00:21:09.886 "uuid": "6d8a6693-a4d7-4edc-87ce-03bcaa18448e", 00:21:09.886 "strip_size_kb": 0, 00:21:09.886 "state": "online", 00:21:09.886 "raid_level": "raid1", 00:21:09.886 "superblock": true, 00:21:09.886 "num_base_bdevs": 2, 00:21:09.886 "num_base_bdevs_discovered": 2, 00:21:09.886 "num_base_bdevs_operational": 2, 00:21:09.886 "base_bdevs_list": [ 00:21:09.886 { 00:21:09.886 "name": "spare", 00:21:09.886 "uuid": "f2ce0f53-91a7-58e1-a982-ba636ae286de", 00:21:09.886 "is_configured": true, 00:21:09.886 "data_offset": 2048, 00:21:09.886 "data_size": 63488 00:21:09.886 }, 00:21:09.886 { 00:21:09.886 "name": "BaseBdev2", 00:21:09.886 "uuid": "f2bdd791-929b-5bab-976e-b26fd1cb3203", 00:21:09.886 "is_configured": true, 00:21:09.886 "data_offset": 2048, 00:21:09.886 "data_size": 63488 00:21:09.886 } 00:21:09.886 ] 00:21:09.886 }' 00:21:09.886 01:03:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:09.886 01:03:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:09.886 01:03:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:09.886 01:03:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:09.886 01:03:44 -- bdev/bdev_raid.sh@660 -- # break 00:21:09.886 01:03:44 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:09.886 01:03:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:09.886 01:03:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:09.886 01:03:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:09.886 01:03:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:09.886 01:03:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.886 01:03:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.145 01:03:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:10.145 "name": "raid_bdev1", 00:21:10.145 "uuid": "6d8a6693-a4d7-4edc-87ce-03bcaa18448e", 00:21:10.145 "strip_size_kb": 0, 00:21:10.145 "state": "online", 00:21:10.145 "raid_level": "raid1", 00:21:10.145 "superblock": true, 00:21:10.145 "num_base_bdevs": 2, 00:21:10.145 "num_base_bdevs_discovered": 2, 00:21:10.145 "num_base_bdevs_operational": 2, 00:21:10.145 "base_bdevs_list": [ 00:21:10.145 { 00:21:10.145 "name": "spare", 00:21:10.145 "uuid": "f2ce0f53-91a7-58e1-a982-ba636ae286de", 00:21:10.145 "is_configured": true, 00:21:10.145 "data_offset": 2048, 00:21:10.145 "data_size": 63488 00:21:10.145 }, 
00:21:10.145 { 00:21:10.145 "name": "BaseBdev2", 00:21:10.145 "uuid": "f2bdd791-929b-5bab-976e-b26fd1cb3203", 00:21:10.145 "is_configured": true, 00:21:10.145 "data_offset": 2048, 00:21:10.145 "data_size": 63488 00:21:10.145 } 00:21:10.145 ] 00:21:10.145 }' 00:21:10.145 01:03:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:10.145 01:03:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:10.145 01:03:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:10.145 01:03:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:10.145 01:03:44 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:10.145 01:03:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:10.145 01:03:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:10.146 01:03:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:10.146 01:03:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:10.146 01:03:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:10.146 01:03:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:10.146 01:03:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:10.146 01:03:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:10.146 01:03:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:10.146 01:03:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.146 01:03:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.405 01:03:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:10.405 "name": "raid_bdev1", 00:21:10.405 "uuid": "6d8a6693-a4d7-4edc-87ce-03bcaa18448e", 00:21:10.405 "strip_size_kb": 0, 00:21:10.405 "state": "online", 00:21:10.405 "raid_level": "raid1", 00:21:10.405 "superblock": true, 00:21:10.405 "num_base_bdevs": 2, 00:21:10.405 "num_base_bdevs_discovered": 2, 00:21:10.405 "num_base_bdevs_operational": 2, 00:21:10.405 "base_bdevs_list": [ 00:21:10.405 { 00:21:10.405 "name": "spare", 00:21:10.405 "uuid": "f2ce0f53-91a7-58e1-a982-ba636ae286de", 00:21:10.405 "is_configured": true, 00:21:10.405 "data_offset": 2048, 00:21:10.405 "data_size": 63488 00:21:10.405 }, 00:21:10.405 { 00:21:10.405 "name": "BaseBdev2", 00:21:10.405 "uuid": "f2bdd791-929b-5bab-976e-b26fd1cb3203", 00:21:10.405 "is_configured": true, 00:21:10.405 "data_offset": 2048, 00:21:10.405 "data_size": 63488 00:21:10.405 } 00:21:10.405 ] 00:21:10.405 }' 00:21:10.405 01:03:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:10.405 01:03:44 -- common/autotest_common.sh@10 -- # set +x 00:21:10.975 01:03:45 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:11.234 [2024-11-18 01:03:45.526531] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:11.234 [2024-11-18 01:03:45.526851] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:11.234 00:21:11.234 Latency(us) 00:21:11.234 [2024-11-18T01:03:45.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.234 [2024-11-18T01:03:45.633Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:11.234 raid_bdev1 : 11.44 118.56 355.67 0.00 0.00 11905.63 306.22 109351.50 00:21:11.234 [2024-11-18T01:03:45.633Z] 
=================================================================================================================== 00:21:11.235 [2024-11-18T01:03:45.634Z] Total : 118.56 355.67 0.00 0.00 11905.63 306.22 109351.50 00:21:11.235 [2024-11-18 01:03:45.551679] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.235 [2024-11-18 01:03:45.551864] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.235 [2024-11-18 01:03:45.551994] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.235 [2024-11-18 01:03:45.552072] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:21:11.235 0 00:21:11.235 01:03:45 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.235 01:03:45 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:11.494 01:03:45 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:11.494 01:03:45 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:11.494 01:03:45 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:11.494 01:03:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:11.494 01:03:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:11.494 01:03:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:11.494 01:03:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:11.494 01:03:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:11.494 01:03:45 -- bdev/nbd_common.sh@12 -- # local i 00:21:11.494 01:03:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:11.494 01:03:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:11.494 01:03:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:11.754 /dev/nbd0 00:21:11.754 01:03:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:11.754 01:03:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:11.754 01:03:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:11.754 01:03:46 -- common/autotest_common.sh@867 -- # local i 00:21:11.754 01:03:46 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:11.754 01:03:46 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:11.754 01:03:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:11.754 01:03:46 -- common/autotest_common.sh@871 -- # break 00:21:11.754 01:03:46 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:11.754 01:03:46 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:11.754 01:03:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:11.754 1+0 records in 00:21:11.754 1+0 records out 00:21:11.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741964 s, 5.5 MB/s 00:21:11.754 01:03:46 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:11.754 01:03:46 -- common/autotest_common.sh@884 -- # size=4096 00:21:11.754 01:03:46 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:11.754 01:03:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:11.754 01:03:46 -- common/autotest_common.sh@887 -- # return 0 00:21:11.754 01:03:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:11.754 01:03:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
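Note: the block-for-block comparison being set up here (and completed just below) exports both bdevs over NBD and compares everything past the superblock area; the 1048576-byte skip matches the data_offset of 2048 blocks x 512 B reported by bdev_raid_get_bdevs. Restated from the trace:

scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1
cmp -i 1048576 /dev/nbd0 /dev/nbd1    # skip the first 1 MiB (superblock/data_offset) on both devices
scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0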
00:21:11.754 01:03:46 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:11.754 01:03:46 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:11.754 01:03:46 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:11.754 01:03:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:11.754 01:03:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:21:11.754 01:03:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:11.754 01:03:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:11.754 01:03:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:11.754 01:03:46 -- bdev/nbd_common.sh@12 -- # local i 00:21:11.754 01:03:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:11.754 01:03:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:11.754 01:03:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:12.013 /dev/nbd1 00:21:12.013 01:03:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:12.013 01:03:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:12.013 01:03:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:12.013 01:03:46 -- common/autotest_common.sh@867 -- # local i 00:21:12.013 01:03:46 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:12.013 01:03:46 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:12.013 01:03:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:12.013 01:03:46 -- common/autotest_common.sh@871 -- # break 00:21:12.013 01:03:46 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:12.013 01:03:46 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:12.013 01:03:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:12.013 1+0 records in 00:21:12.013 1+0 records out 00:21:12.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000699458 s, 5.9 MB/s 00:21:12.013 01:03:46 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:12.013 01:03:46 -- common/autotest_common.sh@884 -- # size=4096 00:21:12.013 01:03:46 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:12.013 01:03:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:12.013 01:03:46 -- common/autotest_common.sh@887 -- # return 0 00:21:12.013 01:03:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:12.013 01:03:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:12.013 01:03:46 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:12.272 01:03:46 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:12.272 01:03:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:12.272 01:03:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:12.272 01:03:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:12.272 01:03:46 -- bdev/nbd_common.sh@51 -- # local i 00:21:12.272 01:03:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:12.272 01:03:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:12.531 01:03:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:12.531 01:03:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:12.531 01:03:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:12.531 
01:03:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:12.531 01:03:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:12.531 01:03:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:12.531 01:03:46 -- bdev/nbd_common.sh@41 -- # break 00:21:12.531 01:03:46 -- bdev/nbd_common.sh@45 -- # return 0 00:21:12.531 01:03:46 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:12.531 01:03:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:12.531 01:03:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:12.531 01:03:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:12.531 01:03:46 -- bdev/nbd_common.sh@51 -- # local i 00:21:12.531 01:03:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:12.531 01:03:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:12.791 01:03:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:12.791 01:03:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:12.791 01:03:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:12.791 01:03:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:12.791 01:03:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:12.791 01:03:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:12.791 01:03:47 -- bdev/nbd_common.sh@41 -- # break 00:21:12.791 01:03:47 -- bdev/nbd_common.sh@45 -- # return 0 00:21:12.791 01:03:47 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:12.791 01:03:47 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:12.791 01:03:47 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:12.791 01:03:47 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:13.051 01:03:47 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:13.051 [2024-11-18 01:03:47.444980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:13.051 [2024-11-18 01:03:47.445385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.051 [2024-11-18 01:03:47.445461] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:13.051 [2024-11-18 01:03:47.445567] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.051 [2024-11-18 01:03:47.448482] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.051 [2024-11-18 01:03:47.448699] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:13.051 [2024-11-18 01:03:47.448943] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:13.051 [2024-11-18 01:03:47.449076] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:13.051 BaseBdev1 00:21:13.310 01:03:47 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:13.310 01:03:47 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:13.310 01:03:47 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:13.569 01:03:47 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p 
BaseBdev2 00:21:13.569 [2024-11-18 01:03:47.893170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:13.569 [2024-11-18 01:03:47.893562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.569 [2024-11-18 01:03:47.893641] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:13.569 [2024-11-18 01:03:47.893740] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.569 [2024-11-18 01:03:47.894265] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.569 [2024-11-18 01:03:47.894325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:13.569 [2024-11-18 01:03:47.894423] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:13.569 [2024-11-18 01:03:47.894438] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:13.569 [2024-11-18 01:03:47.894446] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:13.569 [2024-11-18 01:03:47.894478] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state configuring 00:21:13.569 [2024-11-18 01:03:47.894547] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:13.569 BaseBdev2 00:21:13.569 01:03:47 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:13.828 01:03:48 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:14.087 [2024-11-18 01:03:48.297272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:14.088 [2024-11-18 01:03:48.297637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.088 [2024-11-18 01:03:48.297739] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:14.088 [2024-11-18 01:03:48.297836] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.088 [2024-11-18 01:03:48.298395] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.088 [2024-11-18 01:03:48.298562] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:14.088 [2024-11-18 01:03:48.298755] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:14.088 [2024-11-18 01:03:48.298905] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:14.088 spare 00:21:14.088 01:03:48 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:14.088 01:03:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:14.088 01:03:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:14.088 01:03:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:14.088 01:03:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:14.088 01:03:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:14.088 01:03:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:14.088 01:03:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:14.088 01:03:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:21:14.088 01:03:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:14.088 01:03:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.088 01:03:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.088 [2024-11-18 01:03:48.399072] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:21:14.088 [2024-11-18 01:03:48.399359] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:14.088 [2024-11-18 01:03:48.399604] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:21:14.088 [2024-11-18 01:03:48.400175] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:21:14.088 [2024-11-18 01:03:48.400279] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:21:14.088 [2024-11-18 01:03:48.400481] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.347 01:03:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:14.347 "name": "raid_bdev1", 00:21:14.347 "uuid": "6d8a6693-a4d7-4edc-87ce-03bcaa18448e", 00:21:14.347 "strip_size_kb": 0, 00:21:14.347 "state": "online", 00:21:14.347 "raid_level": "raid1", 00:21:14.347 "superblock": true, 00:21:14.347 "num_base_bdevs": 2, 00:21:14.347 "num_base_bdevs_discovered": 2, 00:21:14.347 "num_base_bdevs_operational": 2, 00:21:14.347 "base_bdevs_list": [ 00:21:14.347 { 00:21:14.347 "name": "spare", 00:21:14.347 "uuid": "f2ce0f53-91a7-58e1-a982-ba636ae286de", 00:21:14.347 "is_configured": true, 00:21:14.347 "data_offset": 2048, 00:21:14.347 "data_size": 63488 00:21:14.347 }, 00:21:14.347 { 00:21:14.347 "name": "BaseBdev2", 00:21:14.347 "uuid": "f2bdd791-929b-5bab-976e-b26fd1cb3203", 00:21:14.347 "is_configured": true, 00:21:14.347 "data_offset": 2048, 00:21:14.347 "data_size": 63488 00:21:14.347 } 00:21:14.347 ] 00:21:14.347 }' 00:21:14.347 01:03:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:14.347 01:03:48 -- common/autotest_common.sh@10 -- # set +x 00:21:14.919 01:03:49 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:14.919 01:03:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:14.919 01:03:49 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:14.919 01:03:49 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:14.919 01:03:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:14.919 01:03:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.919 01:03:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.919 01:03:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:14.919 "name": "raid_bdev1", 00:21:14.919 "uuid": "6d8a6693-a4d7-4edc-87ce-03bcaa18448e", 00:21:14.919 "strip_size_kb": 0, 00:21:14.919 "state": "online", 00:21:14.919 "raid_level": "raid1", 00:21:14.919 "superblock": true, 00:21:14.919 "num_base_bdevs": 2, 00:21:14.919 "num_base_bdevs_discovered": 2, 00:21:14.919 "num_base_bdevs_operational": 2, 00:21:14.919 "base_bdevs_list": [ 00:21:14.919 { 00:21:14.919 "name": "spare", 00:21:14.919 "uuid": "f2ce0f53-91a7-58e1-a982-ba636ae286de", 00:21:14.919 "is_configured": true, 00:21:14.919 "data_offset": 2048, 00:21:14.919 "data_size": 63488 00:21:14.919 }, 00:21:14.919 { 00:21:14.919 "name": "BaseBdev2", 00:21:14.919 "uuid": 
"f2bdd791-929b-5bab-976e-b26fd1cb3203", 00:21:14.919 "is_configured": true, 00:21:14.919 "data_offset": 2048, 00:21:14.919 "data_size": 63488 00:21:14.919 } 00:21:14.919 ] 00:21:14.919 }' 00:21:14.919 01:03:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:15.178 01:03:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:15.178 01:03:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:15.178 01:03:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:15.178 01:03:49 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.178 01:03:49 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:15.436 01:03:49 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.436 01:03:49 -- bdev/bdev_raid.sh@709 -- # killprocess 134655 00:21:15.436 01:03:49 -- common/autotest_common.sh@936 -- # '[' -z 134655 ']' 00:21:15.436 01:03:49 -- common/autotest_common.sh@940 -- # kill -0 134655 00:21:15.436 01:03:49 -- common/autotest_common.sh@941 -- # uname 00:21:15.436 01:03:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:15.436 01:03:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134655 00:21:15.436 01:03:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:15.436 01:03:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:15.436 01:03:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134655' 00:21:15.436 killing process with pid 134655 00:21:15.436 01:03:49 -- common/autotest_common.sh@955 -- # kill 134655 00:21:15.436 Received shutdown signal, test time was about 15.567336 seconds 00:21:15.436 00:21:15.436 Latency(us) 00:21:15.436 [2024-11-18T01:03:49.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.436 [2024-11-18T01:03:49.835Z] =================================================================================================================== 00:21:15.436 [2024-11-18T01:03:49.835Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:15.436 01:03:49 -- common/autotest_common.sh@960 -- # wait 134655 00:21:15.436 [2024-11-18 01:03:49.676284] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:15.436 [2024-11-18 01:03:49.676512] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:15.436 [2024-11-18 01:03:49.676691] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:15.436 [2024-11-18 01:03:49.676773] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:21:15.436 [2024-11-18 01:03:49.723943] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:16.004 00:21:16.004 real 0m20.245s 00:21:16.004 user 0m31.328s 00:21:16.004 sys 0m3.285s 00:21:16.004 01:03:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:16.004 01:03:50 -- common/autotest_common.sh@10 -- # set +x 00:21:16.004 ************************************ 00:21:16.004 END TEST raid_rebuild_test_sb_io 00:21:16.004 ************************************ 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:21:16.004 01:03:50 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:16.004 
01:03:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:16.004 01:03:50 -- common/autotest_common.sh@10 -- # set +x 00:21:16.004 ************************************ 00:21:16.004 START TEST raid_rebuild_test 00:21:16.004 ************************************ 00:21:16.004 01:03:50 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false false 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@544 -- # raid_pid=135210 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135210 /var/tmp/spdk-raid.sock 00:21:16.004 01:03:50 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:16.004 01:03:50 -- common/autotest_common.sh@829 -- # '[' -z 135210 ']' 00:21:16.004 01:03:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:16.004 01:03:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.004 01:03:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:16.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:16.004 01:03:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.004 01:03:50 -- common/autotest_common.sh@10 -- # set +x 00:21:16.004 [2024-11-18 01:03:50.308045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
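A minimal sketch of how the test brings up its RPC target before creating any bdevs; the bdevperf flags and socket path are copied from the log, while SPDK_DIR, SOCK, raid_pid, and the readiness loop are simplified stand-ins for the suite's waitforlisten helper:

# Start bdevperf as the SPDK application that hosts the raid bdev under test.
SPDK_DIR=/home/vagrant/spdk_repo/spdk      # assumed repo location, matches the log paths
SOCK=/var/tmp/spdk-raid.sock

"$SPDK_DIR/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# Wait until the RPC socket answers (simplified stand-in for waitforlisten).
until "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_get_bdevs >/dev/null 2>&1; do
    sleep 0.5
done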
00:21:16.004 [2024-11-18 01:03:50.308606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135210 ] 00:21:16.004 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:16.004 Zero copy mechanism will not be used. 00:21:16.264 [2024-11-18 01:03:50.462616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.264 [2024-11-18 01:03:50.543390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.264 [2024-11-18 01:03:50.622111] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:17.202 01:03:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.202 01:03:51 -- common/autotest_common.sh@862 -- # return 0 00:21:17.202 01:03:51 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:17.202 01:03:51 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:17.202 01:03:51 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:17.202 BaseBdev1 00:21:17.202 01:03:51 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:17.202 01:03:51 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:17.203 01:03:51 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:17.462 BaseBdev2 00:21:17.462 01:03:51 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:17.462 01:03:51 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:17.462 01:03:51 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:17.721 BaseBdev3 00:21:17.721 01:03:52 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:17.721 01:03:52 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:17.721 01:03:52 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:17.980 BaseBdev4 00:21:17.980 01:03:52 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:18.239 spare_malloc 00:21:18.239 01:03:52 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:18.498 spare_delay 00:21:18.498 01:03:52 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:18.498 [2024-11-18 01:03:52.857801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:18.498 [2024-11-18 01:03:52.858186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.498 [2024-11-18 01:03:52.858284] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:18.498 [2024-11-18 01:03:52.858416] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.498 [2024-11-18 01:03:52.861425] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.498 [2024-11-18 01:03:52.861606] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:21:18.498 spare 00:21:18.498 01:03:52 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:18.757 [2024-11-18 01:03:53.050038] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:18.757 [2024-11-18 01:03:53.052768] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:18.758 [2024-11-18 01:03:53.052951] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:18.758 [2024-11-18 01:03:53.053017] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:18.758 [2024-11-18 01:03:53.053225] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:21:18.758 [2024-11-18 01:03:53.053320] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:18.758 [2024-11-18 01:03:53.053545] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:21:18.758 [2024-11-18 01:03:53.054106] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:21:18.758 [2024-11-18 01:03:53.054218] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:21:18.758 [2024-11-18 01:03:53.054576] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.758 01:03:53 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:18.758 01:03:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:18.758 01:03:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:18.758 01:03:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:18.758 01:03:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:18.758 01:03:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:18.758 01:03:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:18.758 01:03:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:18.758 01:03:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:18.758 01:03:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:18.758 01:03:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.758 01:03:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.017 01:03:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:19.017 "name": "raid_bdev1", 00:21:19.017 "uuid": "8694bcdd-4c1d-4e2a-a4df-ef360f318809", 00:21:19.017 "strip_size_kb": 0, 00:21:19.017 "state": "online", 00:21:19.017 "raid_level": "raid1", 00:21:19.017 "superblock": false, 00:21:19.017 "num_base_bdevs": 4, 00:21:19.017 "num_base_bdevs_discovered": 4, 00:21:19.017 "num_base_bdevs_operational": 4, 00:21:19.017 "base_bdevs_list": [ 00:21:19.017 { 00:21:19.017 "name": "BaseBdev1", 00:21:19.017 "uuid": "4d6ada14-d6c5-41bb-a40e-4f205f17d1ec", 00:21:19.017 "is_configured": true, 00:21:19.017 "data_offset": 0, 00:21:19.017 "data_size": 65536 00:21:19.017 }, 00:21:19.017 { 00:21:19.017 "name": "BaseBdev2", 00:21:19.017 "uuid": "b306093c-bb6f-4081-8453-8ec92c35ff44", 00:21:19.017 "is_configured": true, 00:21:19.017 "data_offset": 0, 00:21:19.017 "data_size": 65536 00:21:19.017 }, 00:21:19.017 { 00:21:19.017 "name": "BaseBdev3", 00:21:19.017 "uuid": 
"a6b613e7-e101-4da9-96ff-f985ed32cb46", 00:21:19.017 "is_configured": true, 00:21:19.017 "data_offset": 0, 00:21:19.017 "data_size": 65536 00:21:19.017 }, 00:21:19.017 { 00:21:19.017 "name": "BaseBdev4", 00:21:19.017 "uuid": "4659f87e-2931-4484-9d3e-3bc1265250e8", 00:21:19.017 "is_configured": true, 00:21:19.017 "data_offset": 0, 00:21:19.017 "data_size": 65536 00:21:19.017 } 00:21:19.017 ] 00:21:19.017 }' 00:21:19.017 01:03:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:19.017 01:03:53 -- common/autotest_common.sh@10 -- # set +x 00:21:19.585 01:03:53 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:19.585 01:03:53 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:19.843 [2024-11-18 01:03:54.067114] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:19.843 01:03:54 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:19.843 01:03:54 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.843 01:03:54 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:20.102 01:03:54 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:20.102 01:03:54 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:20.102 01:03:54 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:20.102 01:03:54 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:20.102 01:03:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:20.102 01:03:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:20.102 01:03:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:20.102 01:03:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:20.102 01:03:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:20.102 01:03:54 -- bdev/nbd_common.sh@12 -- # local i 00:21:20.102 01:03:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:20.102 01:03:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:20.102 01:03:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:20.361 [2024-11-18 01:03:54.523078] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:21:20.361 /dev/nbd0 00:21:20.361 01:03:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:20.361 01:03:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:20.361 01:03:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:20.361 01:03:54 -- common/autotest_common.sh@867 -- # local i 00:21:20.361 01:03:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:20.361 01:03:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:20.361 01:03:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:20.361 01:03:54 -- common/autotest_common.sh@871 -- # break 00:21:20.361 01:03:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:20.361 01:03:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:20.361 01:03:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:20.361 1+0 records in 00:21:20.361 1+0 records out 00:21:20.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579081 s, 7.1 MB/s 00:21:20.361 01:03:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:21:20.361 01:03:54 -- common/autotest_common.sh@884 -- # size=4096 00:21:20.361 01:03:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.361 01:03:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:20.361 01:03:54 -- common/autotest_common.sh@887 -- # return 0 00:21:20.361 01:03:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:20.361 01:03:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:20.361 01:03:54 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:20.361 01:03:54 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:20.361 01:03:54 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:25.635 65536+0 records in 00:21:25.635 65536+0 records out 00:21:25.635 33554432 bytes (34 MB, 32 MiB) copied, 4.52585 s, 7.4 MB/s 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:25.635 01:03:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:25.635 01:03:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:25.635 01:03:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:25.635 01:03:59 -- bdev/nbd_common.sh@51 -- # local i 00:21:25.635 01:03:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:25.635 01:03:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:25.635 01:03:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:25.635 [2024-11-18 01:03:59.413958] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.635 01:03:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:25.635 01:03:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:25.635 01:03:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:25.635 01:03:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:25.635 01:03:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:25.635 01:03:59 -- bdev/nbd_common.sh@41 -- # break 00:21:25.635 01:03:59 -- bdev/nbd_common.sh@45 -- # return 0 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:25.635 [2024-11-18 01:03:59.657437] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.635 01:03:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:25.635 "name": "raid_bdev1", 
00:21:25.635 "uuid": "8694bcdd-4c1d-4e2a-a4df-ef360f318809", 00:21:25.635 "strip_size_kb": 0, 00:21:25.635 "state": "online", 00:21:25.635 "raid_level": "raid1", 00:21:25.635 "superblock": false, 00:21:25.635 "num_base_bdevs": 4, 00:21:25.635 "num_base_bdevs_discovered": 3, 00:21:25.635 "num_base_bdevs_operational": 3, 00:21:25.635 "base_bdevs_list": [ 00:21:25.635 { 00:21:25.635 "name": null, 00:21:25.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.635 "is_configured": false, 00:21:25.635 "data_offset": 0, 00:21:25.635 "data_size": 65536 00:21:25.635 }, 00:21:25.635 { 00:21:25.635 "name": "BaseBdev2", 00:21:25.635 "uuid": "b306093c-bb6f-4081-8453-8ec92c35ff44", 00:21:25.635 "is_configured": true, 00:21:25.635 "data_offset": 0, 00:21:25.635 "data_size": 65536 00:21:25.635 }, 00:21:25.635 { 00:21:25.635 "name": "BaseBdev3", 00:21:25.636 "uuid": "a6b613e7-e101-4da9-96ff-f985ed32cb46", 00:21:25.636 "is_configured": true, 00:21:25.636 "data_offset": 0, 00:21:25.636 "data_size": 65536 00:21:25.636 }, 00:21:25.636 { 00:21:25.636 "name": "BaseBdev4", 00:21:25.636 "uuid": "4659f87e-2931-4484-9d3e-3bc1265250e8", 00:21:25.636 "is_configured": true, 00:21:25.636 "data_offset": 0, 00:21:25.636 "data_size": 65536 00:21:25.636 } 00:21:25.636 ] 00:21:25.636 }' 00:21:25.636 01:03:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:25.636 01:03:59 -- common/autotest_common.sh@10 -- # set +x 00:21:26.203 01:04:00 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:26.461 [2024-11-18 01:04:00.633588] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:26.461 [2024-11-18 01:04:00.633917] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:26.461 [2024-11-18 01:04:00.640205] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06080 00:21:26.461 [2024-11-18 01:04:00.642844] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:26.461 01:04:00 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:27.397 01:04:01 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.397 01:04:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:27.397 01:04:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:27.397 01:04:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:27.397 01:04:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:27.397 01:04:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.397 01:04:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.655 01:04:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:27.655 "name": "raid_bdev1", 00:21:27.655 "uuid": "8694bcdd-4c1d-4e2a-a4df-ef360f318809", 00:21:27.655 "strip_size_kb": 0, 00:21:27.655 "state": "online", 00:21:27.655 "raid_level": "raid1", 00:21:27.655 "superblock": false, 00:21:27.655 "num_base_bdevs": 4, 00:21:27.655 "num_base_bdevs_discovered": 4, 00:21:27.655 "num_base_bdevs_operational": 4, 00:21:27.655 "process": { 00:21:27.655 "type": "rebuild", 00:21:27.655 "target": "spare", 00:21:27.655 "progress": { 00:21:27.655 "blocks": 24576, 00:21:27.655 "percent": 37 00:21:27.655 } 00:21:27.655 }, 00:21:27.655 "base_bdevs_list": [ 00:21:27.655 { 00:21:27.655 "name": "spare", 00:21:27.655 "uuid": 
"3c022ed2-d636-5ea9-94d6-475fa4c3937b", 00:21:27.655 "is_configured": true, 00:21:27.655 "data_offset": 0, 00:21:27.655 "data_size": 65536 00:21:27.655 }, 00:21:27.655 { 00:21:27.655 "name": "BaseBdev2", 00:21:27.655 "uuid": "b306093c-bb6f-4081-8453-8ec92c35ff44", 00:21:27.655 "is_configured": true, 00:21:27.655 "data_offset": 0, 00:21:27.655 "data_size": 65536 00:21:27.655 }, 00:21:27.655 { 00:21:27.655 "name": "BaseBdev3", 00:21:27.655 "uuid": "a6b613e7-e101-4da9-96ff-f985ed32cb46", 00:21:27.655 "is_configured": true, 00:21:27.655 "data_offset": 0, 00:21:27.655 "data_size": 65536 00:21:27.655 }, 00:21:27.655 { 00:21:27.655 "name": "BaseBdev4", 00:21:27.655 "uuid": "4659f87e-2931-4484-9d3e-3bc1265250e8", 00:21:27.655 "is_configured": true, 00:21:27.655 "data_offset": 0, 00:21:27.655 "data_size": 65536 00:21:27.655 } 00:21:27.655 ] 00:21:27.655 }' 00:21:27.655 01:04:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:27.655 01:04:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:27.655 01:04:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:27.655 01:04:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:27.655 01:04:02 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:27.914 [2024-11-18 01:04:02.228747] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:27.914 [2024-11-18 01:04:02.255670] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:27.915 [2024-11-18 01:04:02.255978] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.915 01:04:02 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:27.915 01:04:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:27.915 01:04:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:27.915 01:04:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:27.915 01:04:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:27.915 01:04:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:27.915 01:04:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:27.915 01:04:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:27.915 01:04:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:27.915 01:04:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:27.915 01:04:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.915 01:04:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.173 01:04:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:28.173 "name": "raid_bdev1", 00:21:28.173 "uuid": "8694bcdd-4c1d-4e2a-a4df-ef360f318809", 00:21:28.173 "strip_size_kb": 0, 00:21:28.173 "state": "online", 00:21:28.173 "raid_level": "raid1", 00:21:28.173 "superblock": false, 00:21:28.173 "num_base_bdevs": 4, 00:21:28.173 "num_base_bdevs_discovered": 3, 00:21:28.173 "num_base_bdevs_operational": 3, 00:21:28.173 "base_bdevs_list": [ 00:21:28.173 { 00:21:28.173 "name": null, 00:21:28.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.173 "is_configured": false, 00:21:28.173 "data_offset": 0, 00:21:28.173 "data_size": 65536 00:21:28.173 }, 00:21:28.173 { 00:21:28.173 "name": "BaseBdev2", 00:21:28.173 "uuid": "b306093c-bb6f-4081-8453-8ec92c35ff44", 
00:21:28.173 "is_configured": true, 00:21:28.173 "data_offset": 0, 00:21:28.173 "data_size": 65536 00:21:28.173 }, 00:21:28.173 { 00:21:28.173 "name": "BaseBdev3", 00:21:28.173 "uuid": "a6b613e7-e101-4da9-96ff-f985ed32cb46", 00:21:28.173 "is_configured": true, 00:21:28.173 "data_offset": 0, 00:21:28.173 "data_size": 65536 00:21:28.173 }, 00:21:28.173 { 00:21:28.173 "name": "BaseBdev4", 00:21:28.173 "uuid": "4659f87e-2931-4484-9d3e-3bc1265250e8", 00:21:28.173 "is_configured": true, 00:21:28.173 "data_offset": 0, 00:21:28.173 "data_size": 65536 00:21:28.173 } 00:21:28.173 ] 00:21:28.174 }' 00:21:28.174 01:04:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:28.174 01:04:02 -- common/autotest_common.sh@10 -- # set +x 00:21:28.772 01:04:03 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:28.772 01:04:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:28.772 01:04:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:28.772 01:04:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:28.772 01:04:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:28.772 01:04:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.772 01:04:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.032 01:04:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:29.032 "name": "raid_bdev1", 00:21:29.032 "uuid": "8694bcdd-4c1d-4e2a-a4df-ef360f318809", 00:21:29.032 "strip_size_kb": 0, 00:21:29.032 "state": "online", 00:21:29.032 "raid_level": "raid1", 00:21:29.032 "superblock": false, 00:21:29.032 "num_base_bdevs": 4, 00:21:29.032 "num_base_bdevs_discovered": 3, 00:21:29.032 "num_base_bdevs_operational": 3, 00:21:29.032 "base_bdevs_list": [ 00:21:29.032 { 00:21:29.032 "name": null, 00:21:29.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.032 "is_configured": false, 00:21:29.032 "data_offset": 0, 00:21:29.032 "data_size": 65536 00:21:29.032 }, 00:21:29.032 { 00:21:29.032 "name": "BaseBdev2", 00:21:29.032 "uuid": "b306093c-bb6f-4081-8453-8ec92c35ff44", 00:21:29.032 "is_configured": true, 00:21:29.032 "data_offset": 0, 00:21:29.032 "data_size": 65536 00:21:29.032 }, 00:21:29.032 { 00:21:29.032 "name": "BaseBdev3", 00:21:29.032 "uuid": "a6b613e7-e101-4da9-96ff-f985ed32cb46", 00:21:29.032 "is_configured": true, 00:21:29.032 "data_offset": 0, 00:21:29.032 "data_size": 65536 00:21:29.032 }, 00:21:29.032 { 00:21:29.032 "name": "BaseBdev4", 00:21:29.032 "uuid": "4659f87e-2931-4484-9d3e-3bc1265250e8", 00:21:29.032 "is_configured": true, 00:21:29.032 "data_offset": 0, 00:21:29.032 "data_size": 65536 00:21:29.032 } 00:21:29.032 ] 00:21:29.032 }' 00:21:29.032 01:04:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:29.032 01:04:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:29.032 01:04:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:29.291 01:04:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:29.291 01:04:03 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:29.549 [2024-11-18 01:04:03.715542] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:29.549 [2024-11-18 01:04:03.715798] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:29.549 [2024-11-18 01:04:03.721948] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 00:21:29.549 [2024-11-18 01:04:03.724504] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:29.549 01:04:03 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:30.487 01:04:04 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:30.487 01:04:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:30.487 01:04:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:30.487 01:04:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:30.487 01:04:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:30.487 01:04:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.487 01:04:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.746 01:04:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:30.746 "name": "raid_bdev1", 00:21:30.746 "uuid": "8694bcdd-4c1d-4e2a-a4df-ef360f318809", 00:21:30.746 "strip_size_kb": 0, 00:21:30.746 "state": "online", 00:21:30.746 "raid_level": "raid1", 00:21:30.746 "superblock": false, 00:21:30.746 "num_base_bdevs": 4, 00:21:30.746 "num_base_bdevs_discovered": 4, 00:21:30.746 "num_base_bdevs_operational": 4, 00:21:30.746 "process": { 00:21:30.746 "type": "rebuild", 00:21:30.746 "target": "spare", 00:21:30.746 "progress": { 00:21:30.746 "blocks": 24576, 00:21:30.746 "percent": 37 00:21:30.746 } 00:21:30.746 }, 00:21:30.746 "base_bdevs_list": [ 00:21:30.746 { 00:21:30.746 "name": "spare", 00:21:30.746 "uuid": "3c022ed2-d636-5ea9-94d6-475fa4c3937b", 00:21:30.746 "is_configured": true, 00:21:30.746 "data_offset": 0, 00:21:30.746 "data_size": 65536 00:21:30.746 }, 00:21:30.746 { 00:21:30.746 "name": "BaseBdev2", 00:21:30.746 "uuid": "b306093c-bb6f-4081-8453-8ec92c35ff44", 00:21:30.746 "is_configured": true, 00:21:30.746 "data_offset": 0, 00:21:30.746 "data_size": 65536 00:21:30.746 }, 00:21:30.746 { 00:21:30.746 "name": "BaseBdev3", 00:21:30.746 "uuid": "a6b613e7-e101-4da9-96ff-f985ed32cb46", 00:21:30.746 "is_configured": true, 00:21:30.746 "data_offset": 0, 00:21:30.746 "data_size": 65536 00:21:30.746 }, 00:21:30.746 { 00:21:30.746 "name": "BaseBdev4", 00:21:30.746 "uuid": "4659f87e-2931-4484-9d3e-3bc1265250e8", 00:21:30.746 "is_configured": true, 00:21:30.746 "data_offset": 0, 00:21:30.746 "data_size": 65536 00:21:30.746 } 00:21:30.746 ] 00:21:30.746 }' 00:21:30.746 01:04:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:30.746 01:04:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:30.746 01:04:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:30.746 01:04:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:30.746 01:04:05 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:30.746 01:04:05 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:30.746 01:04:05 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:30.746 01:04:05 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:30.746 01:04:05 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:31.005 [2024-11-18 01:04:05.262362] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:31.006 [2024-11-18 01:04:05.336307] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06220 
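A sketch of the degrade-and-rebuild sequence exercised above: fill the array over NBD, drop a base bdev, attach the spare, and let bdev_raid start the rebuild. Commands mirror the log; RPC is the placeholder from the earlier sketches:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Write a known pattern through the raid1 bdev, then detach the NBD device.
dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
$RPC nbd_stop_disk /dev/nbd0

# Degrade the array and bring in the spare; bdev_raid kicks off the rebuild.
$RPC bdev_raid_remove_base_bdev BaseBdev1
$RPC bdev_raid_add_base_bdev raid_bdev1 spare

# Inspect the rebuild process that should now be running against 'spare'.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process'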
00:21:31.006 01:04:05 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:31.006 01:04:05 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:31.006 01:04:05 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.006 01:04:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:31.006 01:04:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:31.006 01:04:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:31.006 01:04:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:31.006 01:04:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.006 01:04:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.265 01:04:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:31.265 "name": "raid_bdev1", 00:21:31.265 "uuid": "8694bcdd-4c1d-4e2a-a4df-ef360f318809", 00:21:31.265 "strip_size_kb": 0, 00:21:31.265 "state": "online", 00:21:31.265 "raid_level": "raid1", 00:21:31.265 "superblock": false, 00:21:31.265 "num_base_bdevs": 4, 00:21:31.265 "num_base_bdevs_discovered": 3, 00:21:31.265 "num_base_bdevs_operational": 3, 00:21:31.265 "process": { 00:21:31.265 "type": "rebuild", 00:21:31.265 "target": "spare", 00:21:31.265 "progress": { 00:21:31.265 "blocks": 36864, 00:21:31.265 "percent": 56 00:21:31.265 } 00:21:31.265 }, 00:21:31.265 "base_bdevs_list": [ 00:21:31.265 { 00:21:31.265 "name": "spare", 00:21:31.265 "uuid": "3c022ed2-d636-5ea9-94d6-475fa4c3937b", 00:21:31.265 "is_configured": true, 00:21:31.265 "data_offset": 0, 00:21:31.265 "data_size": 65536 00:21:31.265 }, 00:21:31.265 { 00:21:31.265 "name": null, 00:21:31.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.265 "is_configured": false, 00:21:31.265 "data_offset": 0, 00:21:31.265 "data_size": 65536 00:21:31.265 }, 00:21:31.265 { 00:21:31.265 "name": "BaseBdev3", 00:21:31.265 "uuid": "a6b613e7-e101-4da9-96ff-f985ed32cb46", 00:21:31.265 "is_configured": true, 00:21:31.265 "data_offset": 0, 00:21:31.265 "data_size": 65536 00:21:31.265 }, 00:21:31.265 { 00:21:31.265 "name": "BaseBdev4", 00:21:31.265 "uuid": "4659f87e-2931-4484-9d3e-3bc1265250e8", 00:21:31.265 "is_configured": true, 00:21:31.265 "data_offset": 0, 00:21:31.265 "data_size": 65536 00:21:31.265 } 00:21:31.265 ] 00:21:31.265 }' 00:21:31.265 01:04:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:31.265 01:04:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:31.265 01:04:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:31.524 01:04:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:31.524 01:04:05 -- bdev/bdev_raid.sh@657 -- # local timeout=448 00:21:31.524 01:04:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:31.524 01:04:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.524 01:04:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:31.524 01:04:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:31.524 01:04:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:31.524 01:04:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:31.524 01:04:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.524 01:04:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.784 01:04:05 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:31.784 "name": "raid_bdev1", 00:21:31.784 "uuid": "8694bcdd-4c1d-4e2a-a4df-ef360f318809", 00:21:31.784 "strip_size_kb": 0, 00:21:31.784 "state": "online", 00:21:31.784 "raid_level": "raid1", 00:21:31.784 "superblock": false, 00:21:31.784 "num_base_bdevs": 4, 00:21:31.784 "num_base_bdevs_discovered": 3, 00:21:31.784 "num_base_bdevs_operational": 3, 00:21:31.784 "process": { 00:21:31.784 "type": "rebuild", 00:21:31.784 "target": "spare", 00:21:31.784 "progress": { 00:21:31.784 "blocks": 43008, 00:21:31.784 "percent": 65 00:21:31.784 } 00:21:31.784 }, 00:21:31.784 "base_bdevs_list": [ 00:21:31.784 { 00:21:31.784 "name": "spare", 00:21:31.784 "uuid": "3c022ed2-d636-5ea9-94d6-475fa4c3937b", 00:21:31.784 "is_configured": true, 00:21:31.784 "data_offset": 0, 00:21:31.784 "data_size": 65536 00:21:31.784 }, 00:21:31.784 { 00:21:31.784 "name": null, 00:21:31.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.784 "is_configured": false, 00:21:31.784 "data_offset": 0, 00:21:31.784 "data_size": 65536 00:21:31.784 }, 00:21:31.784 { 00:21:31.784 "name": "BaseBdev3", 00:21:31.784 "uuid": "a6b613e7-e101-4da9-96ff-f985ed32cb46", 00:21:31.784 "is_configured": true, 00:21:31.784 "data_offset": 0, 00:21:31.784 "data_size": 65536 00:21:31.784 }, 00:21:31.784 { 00:21:31.784 "name": "BaseBdev4", 00:21:31.784 "uuid": "4659f87e-2931-4484-9d3e-3bc1265250e8", 00:21:31.784 "is_configured": true, 00:21:31.784 "data_offset": 0, 00:21:31.784 "data_size": 65536 00:21:31.784 } 00:21:31.784 ] 00:21:31.784 }' 00:21:31.784 01:04:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:31.784 01:04:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:31.784 01:04:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:31.784 01:04:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:31.784 01:04:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:32.722 [2024-11-18 01:04:06.948432] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:32.722 [2024-11-18 01:04:06.948830] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:32.722 [2024-11-18 01:04:06.949006] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.722 01:04:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:32.722 01:04:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.722 01:04:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:32.722 01:04:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:32.722 01:04:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:32.722 01:04:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:32.722 01:04:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.722 01:04:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.982 01:04:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:32.982 "name": "raid_bdev1", 00:21:32.982 "uuid": "8694bcdd-4c1d-4e2a-a4df-ef360f318809", 00:21:32.982 "strip_size_kb": 0, 00:21:32.982 "state": "online", 00:21:32.982 "raid_level": "raid1", 00:21:32.982 "superblock": false, 00:21:32.982 "num_base_bdevs": 4, 00:21:32.982 "num_base_bdevs_discovered": 3, 00:21:32.982 "num_base_bdevs_operational": 3, 00:21:32.982 "base_bdevs_list": [ 00:21:32.982 { 00:21:32.982 
"name": "spare", 00:21:32.982 "uuid": "3c022ed2-d636-5ea9-94d6-475fa4c3937b", 00:21:32.982 "is_configured": true, 00:21:32.982 "data_offset": 0, 00:21:32.982 "data_size": 65536 00:21:32.982 }, 00:21:32.982 { 00:21:32.982 "name": null, 00:21:32.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.982 "is_configured": false, 00:21:32.982 "data_offset": 0, 00:21:32.982 "data_size": 65536 00:21:32.982 }, 00:21:32.982 { 00:21:32.982 "name": "BaseBdev3", 00:21:32.982 "uuid": "a6b613e7-e101-4da9-96ff-f985ed32cb46", 00:21:32.982 "is_configured": true, 00:21:32.982 "data_offset": 0, 00:21:32.982 "data_size": 65536 00:21:32.982 }, 00:21:32.982 { 00:21:32.982 "name": "BaseBdev4", 00:21:32.982 "uuid": "4659f87e-2931-4484-9d3e-3bc1265250e8", 00:21:32.982 "is_configured": true, 00:21:32.982 "data_offset": 0, 00:21:32.982 "data_size": 65536 00:21:32.982 } 00:21:32.982 ] 00:21:32.982 }' 00:21:32.982 01:04:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:32.982 01:04:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:32.982 01:04:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:33.241 01:04:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:33.241 01:04:07 -- bdev/bdev_raid.sh@660 -- # break 00:21:33.241 01:04:07 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:33.241 01:04:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:33.241 01:04:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:33.241 01:04:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:33.241 01:04:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:33.241 01:04:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.241 01:04:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:33.501 "name": "raid_bdev1", 00:21:33.501 "uuid": "8694bcdd-4c1d-4e2a-a4df-ef360f318809", 00:21:33.501 "strip_size_kb": 0, 00:21:33.501 "state": "online", 00:21:33.501 "raid_level": "raid1", 00:21:33.501 "superblock": false, 00:21:33.501 "num_base_bdevs": 4, 00:21:33.501 "num_base_bdevs_discovered": 3, 00:21:33.501 "num_base_bdevs_operational": 3, 00:21:33.501 "base_bdevs_list": [ 00:21:33.501 { 00:21:33.501 "name": "spare", 00:21:33.501 "uuid": "3c022ed2-d636-5ea9-94d6-475fa4c3937b", 00:21:33.501 "is_configured": true, 00:21:33.501 "data_offset": 0, 00:21:33.501 "data_size": 65536 00:21:33.501 }, 00:21:33.501 { 00:21:33.501 "name": null, 00:21:33.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.501 "is_configured": false, 00:21:33.501 "data_offset": 0, 00:21:33.501 "data_size": 65536 00:21:33.501 }, 00:21:33.501 { 00:21:33.501 "name": "BaseBdev3", 00:21:33.501 "uuid": "a6b613e7-e101-4da9-96ff-f985ed32cb46", 00:21:33.501 "is_configured": true, 00:21:33.501 "data_offset": 0, 00:21:33.501 "data_size": 65536 00:21:33.501 }, 00:21:33.501 { 00:21:33.501 "name": "BaseBdev4", 00:21:33.501 "uuid": "4659f87e-2931-4484-9d3e-3bc1265250e8", 00:21:33.501 "is_configured": true, 00:21:33.501 "data_offset": 0, 00:21:33.501 "data_size": 65536 00:21:33.501 } 00:21:33.501 ] 00:21:33.501 }' 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 
00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.501 01:04:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.761 01:04:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:33.761 "name": "raid_bdev1", 00:21:33.761 "uuid": "8694bcdd-4c1d-4e2a-a4df-ef360f318809", 00:21:33.761 "strip_size_kb": 0, 00:21:33.761 "state": "online", 00:21:33.761 "raid_level": "raid1", 00:21:33.761 "superblock": false, 00:21:33.761 "num_base_bdevs": 4, 00:21:33.761 "num_base_bdevs_discovered": 3, 00:21:33.761 "num_base_bdevs_operational": 3, 00:21:33.761 "base_bdevs_list": [ 00:21:33.761 { 00:21:33.761 "name": "spare", 00:21:33.761 "uuid": "3c022ed2-d636-5ea9-94d6-475fa4c3937b", 00:21:33.761 "is_configured": true, 00:21:33.761 "data_offset": 0, 00:21:33.761 "data_size": 65536 00:21:33.761 }, 00:21:33.761 { 00:21:33.761 "name": null, 00:21:33.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.761 "is_configured": false, 00:21:33.761 "data_offset": 0, 00:21:33.761 "data_size": 65536 00:21:33.761 }, 00:21:33.761 { 00:21:33.761 "name": "BaseBdev3", 00:21:33.761 "uuid": "a6b613e7-e101-4da9-96ff-f985ed32cb46", 00:21:33.761 "is_configured": true, 00:21:33.761 "data_offset": 0, 00:21:33.761 "data_size": 65536 00:21:33.761 }, 00:21:33.761 { 00:21:33.761 "name": "BaseBdev4", 00:21:33.761 "uuid": "4659f87e-2931-4484-9d3e-3bc1265250e8", 00:21:33.761 "is_configured": true, 00:21:33.761 "data_offset": 0, 00:21:33.761 "data_size": 65536 00:21:33.761 } 00:21:33.761 ] 00:21:33.761 }' 00:21:33.761 01:04:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:33.761 01:04:08 -- common/autotest_common.sh@10 -- # set +x 00:21:34.329 01:04:08 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:34.588 [2024-11-18 01:04:08.820381] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:34.588 [2024-11-18 01:04:08.820667] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:34.588 [2024-11-18 01:04:08.820936] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:34.588 [2024-11-18 01:04:08.821142] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.588 [2024-11-18 01:04:08.821224] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:21:34.588 01:04:08 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.588 01:04:08 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:34.847 01:04:09 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:34.847 01:04:09 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:34.847 01:04:09 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:34.847 01:04:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:34.847 01:04:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:34.847 01:04:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:34.847 01:04:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:34.847 01:04:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:34.847 01:04:09 -- bdev/nbd_common.sh@12 -- # local i 00:21:34.847 01:04:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:34.847 01:04:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:34.847 01:04:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:35.105 /dev/nbd0 00:21:35.105 01:04:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:35.105 01:04:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:35.105 01:04:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:35.105 01:04:09 -- common/autotest_common.sh@867 -- # local i 00:21:35.105 01:04:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:35.105 01:04:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:35.105 01:04:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:35.105 01:04:09 -- common/autotest_common.sh@871 -- # break 00:21:35.105 01:04:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:35.105 01:04:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:35.105 01:04:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:35.105 1+0 records in 00:21:35.105 1+0 records out 00:21:35.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290259 s, 14.1 MB/s 00:21:35.105 01:04:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:35.105 01:04:09 -- common/autotest_common.sh@884 -- # size=4096 00:21:35.105 01:04:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:35.105 01:04:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:35.105 01:04:09 -- common/autotest_common.sh@887 -- # return 0 00:21:35.105 01:04:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:35.105 01:04:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:35.105 01:04:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:35.363 /dev/nbd1 00:21:35.363 01:04:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:35.363 01:04:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:35.363 01:04:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:35.363 01:04:09 -- common/autotest_common.sh@867 -- # local i 00:21:35.363 01:04:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:35.363 01:04:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:35.363 01:04:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:35.363 01:04:09 -- common/autotest_common.sh@871 -- # break 00:21:35.363 01:04:09 -- 
common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:35.363 01:04:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:35.363 01:04:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:35.363 1+0 records in 00:21:35.363 1+0 records out 00:21:35.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000795626 s, 5.1 MB/s 00:21:35.363 01:04:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:35.363 01:04:09 -- common/autotest_common.sh@884 -- # size=4096 00:21:35.363 01:04:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:35.363 01:04:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:35.363 01:04:09 -- common/autotest_common.sh@887 -- # return 0 00:21:35.363 01:04:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:35.363 01:04:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:35.363 01:04:09 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:35.363 01:04:09 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:35.363 01:04:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:35.363 01:04:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:35.363 01:04:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:35.363 01:04:09 -- bdev/nbd_common.sh@51 -- # local i 00:21:35.363 01:04:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:35.363 01:04:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:35.621 01:04:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:35.621 01:04:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:35.621 01:04:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:35.621 01:04:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:35.621 01:04:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:35.621 01:04:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:35.621 01:04:09 -- bdev/nbd_common.sh@41 -- # break 00:21:35.621 01:04:09 -- bdev/nbd_common.sh@45 -- # return 0 00:21:35.621 01:04:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:35.621 01:04:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:35.880 01:04:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:35.880 01:04:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:35.880 01:04:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:35.880 01:04:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:35.880 01:04:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:35.880 01:04:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:35.880 01:04:10 -- bdev/nbd_common.sh@41 -- # break 00:21:35.880 01:04:10 -- bdev/nbd_common.sh@45 -- # return 0 00:21:35.880 01:04:10 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:35.880 01:04:10 -- bdev/bdev_raid.sh@709 -- # killprocess 135210 00:21:35.880 01:04:10 -- common/autotest_common.sh@936 -- # '[' -z 135210 ']' 00:21:35.880 01:04:10 -- common/autotest_common.sh@940 -- # kill -0 135210 00:21:35.880 01:04:10 -- common/autotest_common.sh@941 -- # uname 00:21:35.880 01:04:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:35.880 01:04:10 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 135210 00:21:35.880 01:04:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:35.880 01:04:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:35.880 01:04:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135210' 00:21:35.880 killing process with pid 135210 00:21:35.880 01:04:10 -- common/autotest_common.sh@955 -- # kill 135210 00:21:35.880 Received shutdown signal, test time was about 60.000000 seconds 00:21:35.880 00:21:35.881 Latency(us) 00:21:35.881 [2024-11-18T01:04:10.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.881 [2024-11-18T01:04:10.280Z] =================================================================================================================== 00:21:35.881 [2024-11-18T01:04:10.280Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:35.881 01:04:10 -- common/autotest_common.sh@960 -- # wait 135210 00:21:35.881 [2024-11-18 01:04:10.259070] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:36.140 [2024-11-18 01:04:10.353646] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:36.399 01:04:10 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:36.399 00:21:36.399 real 0m20.543s 00:21:36.399 user 0m28.158s 00:21:36.399 sys 0m4.522s 00:21:36.399 01:04:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:36.399 01:04:10 -- common/autotest_common.sh@10 -- # set +x 00:21:36.399 ************************************ 00:21:36.399 END TEST raid_rebuild_test 00:21:36.399 ************************************ 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:21:36.658 01:04:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:36.658 01:04:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:36.658 01:04:10 -- common/autotest_common.sh@10 -- # set +x 00:21:36.658 ************************************ 00:21:36.658 START TEST raid_rebuild_test_sb 00:21:36.658 ************************************ 00:21:36.658 01:04:10 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true false 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 
'BaseBdev3' 'BaseBdev4') 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@544 -- # raid_pid=135741 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135741 /var/tmp/spdk-raid.sock 00:21:36.658 01:04:10 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:36.658 01:04:10 -- common/autotest_common.sh@829 -- # '[' -z 135741 ']' 00:21:36.658 01:04:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:36.658 01:04:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.658 01:04:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:36.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:36.658 01:04:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.658 01:04:10 -- common/autotest_common.sh@10 -- # set +x 00:21:36.658 [2024-11-18 01:04:10.911492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:36.658 [2024-11-18 01:04:10.911864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135741 ] 00:21:36.658 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:36.658 Zero copy mechanism will not be used. 
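Note: the trace that follows exercises raid_rebuild_test_sb against the bdevperf instance listening on /var/tmp/spdk-raid.sock. As a readability aid only, here is a condensed sketch of the device setup the test performs, built from the rpc.py calls that appear verbatim in the trace below (the bdev_malloc_create 32 512 geometry, the spare delay parameters, and the -s superblock flag are all taken from those calls; the for-loop is shorthand, not part of the actual script):

  # four malloc base bdevs, each wrapped in a passthru bdev (BaseBdev1..BaseBdev4)
  for i in 1 2 3 4; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}
  done
  # spare device used later for the rebuild: malloc -> delay -> passthru
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
  # RAID1 array with on-disk superblock (-s) assembled from the four base bdevs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1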
00:21:36.658 [2024-11-18 01:04:11.054435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.918 [2024-11-18 01:04:11.133633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.918 [2024-11-18 01:04:11.212049] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.487 01:04:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.487 01:04:11 -- common/autotest_common.sh@862 -- # return 0 00:21:37.487 01:04:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:37.487 01:04:11 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:37.487 01:04:11 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:37.746 BaseBdev1_malloc 00:21:37.746 01:04:12 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:38.006 [2024-11-18 01:04:12.249089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:38.006 [2024-11-18 01:04:12.249395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.006 [2024-11-18 01:04:12.249487] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:21:38.006 [2024-11-18 01:04:12.249603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.006 [2024-11-18 01:04:12.252571] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.006 [2024-11-18 01:04:12.252741] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:38.006 BaseBdev1 00:21:38.006 01:04:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:38.006 01:04:12 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:38.006 01:04:12 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:38.265 BaseBdev2_malloc 00:21:38.265 01:04:12 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:38.265 [2024-11-18 01:04:12.648949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:38.265 [2024-11-18 01:04:12.649326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.265 [2024-11-18 01:04:12.649406] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:21:38.265 [2024-11-18 01:04:12.649538] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.265 [2024-11-18 01:04:12.652265] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.265 [2024-11-18 01:04:12.652427] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:38.265 BaseBdev2 00:21:38.524 01:04:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:38.524 01:04:12 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:38.524 01:04:12 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:38.524 BaseBdev3_malloc 00:21:38.524 01:04:12 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:21:38.784 [2024-11-18 01:04:13.066008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:38.784 [2024-11-18 01:04:13.066419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.784 [2024-11-18 01:04:13.066511] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:38.784 [2024-11-18 01:04:13.066656] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.784 [2024-11-18 01:04:13.069563] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.784 [2024-11-18 01:04:13.069746] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:38.784 BaseBdev3 00:21:38.784 01:04:13 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:38.784 01:04:13 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:38.784 01:04:13 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:39.043 BaseBdev4_malloc 00:21:39.043 01:04:13 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:39.302 [2024-11-18 01:04:13.465809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:39.302 [2024-11-18 01:04:13.466180] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.302 [2024-11-18 01:04:13.466263] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:39.302 [2024-11-18 01:04:13.466395] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.302 [2024-11-18 01:04:13.469247] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.302 [2024-11-18 01:04:13.469429] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:39.302 BaseBdev4 00:21:39.302 01:04:13 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:39.561 spare_malloc 00:21:39.561 01:04:13 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:39.561 spare_delay 00:21:39.561 01:04:13 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:39.820 [2024-11-18 01:04:14.117683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:39.820 [2024-11-18 01:04:14.118071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.820 [2024-11-18 01:04:14.118162] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:39.820 [2024-11-18 01:04:14.118294] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.820 [2024-11-18 01:04:14.121207] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.820 [2024-11-18 01:04:14.121377] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:39.820 spare 00:21:39.820 01:04:14 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:40.080 [2024-11-18 01:04:14.309838] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:40.080 [2024-11-18 01:04:14.312594] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:40.080 [2024-11-18 01:04:14.312797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:40.080 [2024-11-18 01:04:14.312875] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:40.080 [2024-11-18 01:04:14.313197] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:21:40.080 [2024-11-18 01:04:14.313294] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:40.080 [2024-11-18 01:04:14.313514] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:21:40.080 [2024-11-18 01:04:14.313998] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:21:40.080 [2024-11-18 01:04:14.314097] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:21:40.080 [2024-11-18 01:04:14.314390] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.080 01:04:14 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:40.080 01:04:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:40.080 01:04:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:40.080 01:04:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:40.080 01:04:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:40.080 01:04:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:40.080 01:04:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:40.080 01:04:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:40.080 01:04:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:40.080 01:04:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:40.080 01:04:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.080 01:04:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.339 01:04:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:40.339 "name": "raid_bdev1", 00:21:40.339 "uuid": "894f2571-6641-4d94-b5a8-8e40bb554f0a", 00:21:40.339 "strip_size_kb": 0, 00:21:40.339 "state": "online", 00:21:40.339 "raid_level": "raid1", 00:21:40.339 "superblock": true, 00:21:40.339 "num_base_bdevs": 4, 00:21:40.339 "num_base_bdevs_discovered": 4, 00:21:40.339 "num_base_bdevs_operational": 4, 00:21:40.339 "base_bdevs_list": [ 00:21:40.339 { 00:21:40.339 "name": "BaseBdev1", 00:21:40.339 "uuid": "bb7cbe83-6e3f-57b6-89e5-281e911666d5", 00:21:40.339 "is_configured": true, 00:21:40.339 "data_offset": 2048, 00:21:40.339 "data_size": 63488 00:21:40.339 }, 00:21:40.339 { 00:21:40.339 "name": "BaseBdev2", 00:21:40.339 "uuid": "aad847f1-5b3c-5753-9063-7a4a8923cc88", 00:21:40.339 "is_configured": true, 00:21:40.339 "data_offset": 2048, 00:21:40.339 "data_size": 63488 00:21:40.339 }, 00:21:40.339 { 00:21:40.339 "name": "BaseBdev3", 00:21:40.339 "uuid": "d2232ae4-8047-526c-8617-129a36d1664c", 00:21:40.339 "is_configured": true, 00:21:40.339 "data_offset": 2048, 00:21:40.339 "data_size": 63488 00:21:40.339 }, 00:21:40.339 
{ 00:21:40.339 "name": "BaseBdev4", 00:21:40.339 "uuid": "22949cd5-d1c3-52fd-a780-1ad933a6c709", 00:21:40.339 "is_configured": true, 00:21:40.339 "data_offset": 2048, 00:21:40.339 "data_size": 63488 00:21:40.339 } 00:21:40.339 ] 00:21:40.339 }' 00:21:40.339 01:04:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:40.339 01:04:14 -- common/autotest_common.sh@10 -- # set +x 00:21:40.907 01:04:15 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:40.907 01:04:15 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:41.166 [2024-11-18 01:04:15.418806] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.166 01:04:15 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:41.166 01:04:15 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:41.166 01:04:15 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.425 01:04:15 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:41.425 01:04:15 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:41.425 01:04:15 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:41.425 01:04:15 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:41.425 01:04:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:41.425 01:04:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:41.425 01:04:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:41.425 01:04:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:41.425 01:04:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:41.425 01:04:15 -- bdev/nbd_common.sh@12 -- # local i 00:21:41.425 01:04:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:41.425 01:04:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:41.425 01:04:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:41.685 [2024-11-18 01:04:15.858716] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:21:41.685 /dev/nbd0 00:21:41.685 01:04:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:41.685 01:04:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:41.685 01:04:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:41.685 01:04:15 -- common/autotest_common.sh@867 -- # local i 00:21:41.685 01:04:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:41.685 01:04:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:41.685 01:04:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:41.685 01:04:15 -- common/autotest_common.sh@871 -- # break 00:21:41.685 01:04:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:41.685 01:04:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:41.685 01:04:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.685 1+0 records in 00:21:41.685 1+0 records out 00:21:41.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419032 s, 9.8 MB/s 00:21:41.685 01:04:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.685 01:04:15 -- common/autotest_common.sh@884 -- # size=4096 00:21:41.685 01:04:15 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.685 01:04:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:41.685 01:04:15 -- common/autotest_common.sh@887 -- # return 0 00:21:41.685 01:04:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.685 01:04:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:41.685 01:04:15 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:41.685 01:04:15 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:41.685 01:04:15 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:46.971 63488+0 records in 00:21:46.971 63488+0 records out 00:21:46.971 32505856 bytes (33 MB, 31 MiB) copied, 5.01286 s, 6.5 MB/s 00:21:46.971 01:04:20 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:46.971 01:04:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:46.971 01:04:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:46.971 01:04:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:46.971 01:04:20 -- bdev/nbd_common.sh@51 -- # local i 00:21:46.971 01:04:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:46.971 01:04:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:46.971 [2024-11-18 01:04:21.182673] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.971 01:04:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:46.971 01:04:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:46.971 01:04:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:46.971 01:04:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:46.971 01:04:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:46.971 01:04:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:46.971 01:04:21 -- bdev/nbd_common.sh@41 -- # break 00:21:46.971 01:04:21 -- bdev/nbd_common.sh@45 -- # return 0 00:21:46.971 01:04:21 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:47.230 [2024-11-18 01:04:21.378261] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:47.230 01:04:21 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:47.230 01:04:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:47.230 01:04:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:47.230 01:04:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:47.230 01:04:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:47.230 01:04:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:47.230 01:04:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:47.230 01:04:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:47.230 01:04:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:47.230 01:04:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:47.230 01:04:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.230 01:04:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.490 01:04:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:47.490 "name": "raid_bdev1", 00:21:47.490 "uuid": "894f2571-6641-4d94-b5a8-8e40bb554f0a", 00:21:47.490 "strip_size_kb": 0, 00:21:47.490 "state": "online", 00:21:47.490 
"raid_level": "raid1", 00:21:47.490 "superblock": true, 00:21:47.490 "num_base_bdevs": 4, 00:21:47.490 "num_base_bdevs_discovered": 3, 00:21:47.490 "num_base_bdevs_operational": 3, 00:21:47.490 "base_bdevs_list": [ 00:21:47.490 { 00:21:47.490 "name": null, 00:21:47.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.490 "is_configured": false, 00:21:47.490 "data_offset": 2048, 00:21:47.490 "data_size": 63488 00:21:47.490 }, 00:21:47.490 { 00:21:47.490 "name": "BaseBdev2", 00:21:47.490 "uuid": "aad847f1-5b3c-5753-9063-7a4a8923cc88", 00:21:47.490 "is_configured": true, 00:21:47.490 "data_offset": 2048, 00:21:47.490 "data_size": 63488 00:21:47.490 }, 00:21:47.490 { 00:21:47.490 "name": "BaseBdev3", 00:21:47.490 "uuid": "d2232ae4-8047-526c-8617-129a36d1664c", 00:21:47.490 "is_configured": true, 00:21:47.490 "data_offset": 2048, 00:21:47.490 "data_size": 63488 00:21:47.490 }, 00:21:47.490 { 00:21:47.490 "name": "BaseBdev4", 00:21:47.490 "uuid": "22949cd5-d1c3-52fd-a780-1ad933a6c709", 00:21:47.490 "is_configured": true, 00:21:47.490 "data_offset": 2048, 00:21:47.490 "data_size": 63488 00:21:47.490 } 00:21:47.490 ] 00:21:47.490 }' 00:21:47.490 01:04:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:47.490 01:04:21 -- common/autotest_common.sh@10 -- # set +x 00:21:48.055 01:04:22 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:48.313 [2024-11-18 01:04:22.490884] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:48.313 [2024-11-18 01:04:22.491210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:48.313 [2024-11-18 01:04:22.497410] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:21:48.313 [2024-11-18 01:04:22.500112] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:48.313 01:04:22 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:49.248 01:04:23 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:49.248 01:04:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:49.248 01:04:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:49.248 01:04:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:49.249 01:04:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:49.249 01:04:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.249 01:04:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.507 01:04:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:49.507 "name": "raid_bdev1", 00:21:49.507 "uuid": "894f2571-6641-4d94-b5a8-8e40bb554f0a", 00:21:49.507 "strip_size_kb": 0, 00:21:49.507 "state": "online", 00:21:49.507 "raid_level": "raid1", 00:21:49.507 "superblock": true, 00:21:49.507 "num_base_bdevs": 4, 00:21:49.507 "num_base_bdevs_discovered": 4, 00:21:49.507 "num_base_bdevs_operational": 4, 00:21:49.507 "process": { 00:21:49.507 "type": "rebuild", 00:21:49.507 "target": "spare", 00:21:49.507 "progress": { 00:21:49.507 "blocks": 24576, 00:21:49.507 "percent": 38 00:21:49.507 } 00:21:49.507 }, 00:21:49.507 "base_bdevs_list": [ 00:21:49.507 { 00:21:49.507 "name": "spare", 00:21:49.507 "uuid": "7797c605-74cc-52f3-a031-008721a045ea", 00:21:49.507 "is_configured": true, 00:21:49.507 "data_offset": 2048, 00:21:49.507 "data_size": 63488 00:21:49.507 
}, 00:21:49.507 { 00:21:49.507 "name": "BaseBdev2", 00:21:49.508 "uuid": "aad847f1-5b3c-5753-9063-7a4a8923cc88", 00:21:49.508 "is_configured": true, 00:21:49.508 "data_offset": 2048, 00:21:49.508 "data_size": 63488 00:21:49.508 }, 00:21:49.508 { 00:21:49.508 "name": "BaseBdev3", 00:21:49.508 "uuid": "d2232ae4-8047-526c-8617-129a36d1664c", 00:21:49.508 "is_configured": true, 00:21:49.508 "data_offset": 2048, 00:21:49.508 "data_size": 63488 00:21:49.508 }, 00:21:49.508 { 00:21:49.508 "name": "BaseBdev4", 00:21:49.508 "uuid": "22949cd5-d1c3-52fd-a780-1ad933a6c709", 00:21:49.508 "is_configured": true, 00:21:49.508 "data_offset": 2048, 00:21:49.508 "data_size": 63488 00:21:49.508 } 00:21:49.508 ] 00:21:49.508 }' 00:21:49.508 01:04:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:49.508 01:04:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:49.508 01:04:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:49.508 01:04:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:49.508 01:04:23 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:49.767 [2024-11-18 01:04:24.042178] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:49.767 [2024-11-18 01:04:24.113012] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:49.767 [2024-11-18 01:04:24.113303] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.767 01:04:24 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:49.767 01:04:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:49.767 01:04:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:49.767 01:04:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:49.767 01:04:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:49.767 01:04:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:49.767 01:04:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:49.767 01:04:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:49.767 01:04:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:49.767 01:04:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:49.767 01:04:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.767 01:04:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.335 01:04:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:50.335 "name": "raid_bdev1", 00:21:50.335 "uuid": "894f2571-6641-4d94-b5a8-8e40bb554f0a", 00:21:50.335 "strip_size_kb": 0, 00:21:50.335 "state": "online", 00:21:50.335 "raid_level": "raid1", 00:21:50.335 "superblock": true, 00:21:50.335 "num_base_bdevs": 4, 00:21:50.335 "num_base_bdevs_discovered": 3, 00:21:50.335 "num_base_bdevs_operational": 3, 00:21:50.335 "base_bdevs_list": [ 00:21:50.335 { 00:21:50.335 "name": null, 00:21:50.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.335 "is_configured": false, 00:21:50.335 "data_offset": 2048, 00:21:50.335 "data_size": 63488 00:21:50.335 }, 00:21:50.335 { 00:21:50.335 "name": "BaseBdev2", 00:21:50.335 "uuid": "aad847f1-5b3c-5753-9063-7a4a8923cc88", 00:21:50.335 "is_configured": true, 00:21:50.335 "data_offset": 2048, 00:21:50.335 "data_size": 63488 00:21:50.335 }, 00:21:50.335 { 00:21:50.335 
"name": "BaseBdev3", 00:21:50.335 "uuid": "d2232ae4-8047-526c-8617-129a36d1664c", 00:21:50.335 "is_configured": true, 00:21:50.335 "data_offset": 2048, 00:21:50.335 "data_size": 63488 00:21:50.335 }, 00:21:50.335 { 00:21:50.335 "name": "BaseBdev4", 00:21:50.335 "uuid": "22949cd5-d1c3-52fd-a780-1ad933a6c709", 00:21:50.335 "is_configured": true, 00:21:50.335 "data_offset": 2048, 00:21:50.335 "data_size": 63488 00:21:50.335 } 00:21:50.335 ] 00:21:50.335 }' 00:21:50.335 01:04:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:50.335 01:04:24 -- common/autotest_common.sh@10 -- # set +x 00:21:50.903 01:04:25 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:50.903 01:04:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:50.903 01:04:25 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:50.903 01:04:25 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:50.903 01:04:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:50.903 01:04:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.903 01:04:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.903 01:04:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:50.903 "name": "raid_bdev1", 00:21:50.903 "uuid": "894f2571-6641-4d94-b5a8-8e40bb554f0a", 00:21:50.903 "strip_size_kb": 0, 00:21:50.903 "state": "online", 00:21:50.903 "raid_level": "raid1", 00:21:50.903 "superblock": true, 00:21:50.903 "num_base_bdevs": 4, 00:21:50.903 "num_base_bdevs_discovered": 3, 00:21:50.903 "num_base_bdevs_operational": 3, 00:21:50.903 "base_bdevs_list": [ 00:21:50.903 { 00:21:50.903 "name": null, 00:21:50.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.903 "is_configured": false, 00:21:50.903 "data_offset": 2048, 00:21:50.903 "data_size": 63488 00:21:50.903 }, 00:21:50.903 { 00:21:50.903 "name": "BaseBdev2", 00:21:50.903 "uuid": "aad847f1-5b3c-5753-9063-7a4a8923cc88", 00:21:50.903 "is_configured": true, 00:21:50.903 "data_offset": 2048, 00:21:50.903 "data_size": 63488 00:21:50.903 }, 00:21:50.903 { 00:21:50.903 "name": "BaseBdev3", 00:21:50.903 "uuid": "d2232ae4-8047-526c-8617-129a36d1664c", 00:21:50.903 "is_configured": true, 00:21:50.903 "data_offset": 2048, 00:21:50.903 "data_size": 63488 00:21:50.903 }, 00:21:50.903 { 00:21:50.903 "name": "BaseBdev4", 00:21:50.904 "uuid": "22949cd5-d1c3-52fd-a780-1ad933a6c709", 00:21:50.904 "is_configured": true, 00:21:50.904 "data_offset": 2048, 00:21:50.904 "data_size": 63488 00:21:50.904 } 00:21:50.904 ] 00:21:50.904 }' 00:21:50.904 01:04:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:50.904 01:04:25 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:50.904 01:04:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:50.904 01:04:25 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:50.904 01:04:25 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:51.163 [2024-11-18 01:04:25.464770] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:51.163 [2024-11-18 01:04:25.464981] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:51.163 [2024-11-18 01:04:25.471094] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e5c0 00:21:51.163 [2024-11-18 01:04:25.473672] 
bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:51.163 01:04:25 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:52.099 01:04:26 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.099 01:04:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:52.099 01:04:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:52.099 01:04:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:52.099 01:04:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:52.099 01:04:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.359 01:04:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.359 01:04:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:52.359 "name": "raid_bdev1", 00:21:52.359 "uuid": "894f2571-6641-4d94-b5a8-8e40bb554f0a", 00:21:52.359 "strip_size_kb": 0, 00:21:52.359 "state": "online", 00:21:52.359 "raid_level": "raid1", 00:21:52.359 "superblock": true, 00:21:52.359 "num_base_bdevs": 4, 00:21:52.359 "num_base_bdevs_discovered": 4, 00:21:52.359 "num_base_bdevs_operational": 4, 00:21:52.359 "process": { 00:21:52.359 "type": "rebuild", 00:21:52.359 "target": "spare", 00:21:52.359 "progress": { 00:21:52.359 "blocks": 24576, 00:21:52.359 "percent": 38 00:21:52.359 } 00:21:52.359 }, 00:21:52.359 "base_bdevs_list": [ 00:21:52.359 { 00:21:52.359 "name": "spare", 00:21:52.359 "uuid": "7797c605-74cc-52f3-a031-008721a045ea", 00:21:52.359 "is_configured": true, 00:21:52.359 "data_offset": 2048, 00:21:52.359 "data_size": 63488 00:21:52.359 }, 00:21:52.359 { 00:21:52.359 "name": "BaseBdev2", 00:21:52.359 "uuid": "aad847f1-5b3c-5753-9063-7a4a8923cc88", 00:21:52.359 "is_configured": true, 00:21:52.359 "data_offset": 2048, 00:21:52.359 "data_size": 63488 00:21:52.359 }, 00:21:52.359 { 00:21:52.359 "name": "BaseBdev3", 00:21:52.359 "uuid": "d2232ae4-8047-526c-8617-129a36d1664c", 00:21:52.359 "is_configured": true, 00:21:52.359 "data_offset": 2048, 00:21:52.359 "data_size": 63488 00:21:52.359 }, 00:21:52.359 { 00:21:52.359 "name": "BaseBdev4", 00:21:52.359 "uuid": "22949cd5-d1c3-52fd-a780-1ad933a6c709", 00:21:52.359 "is_configured": true, 00:21:52.359 "data_offset": 2048, 00:21:52.359 "data_size": 63488 00:21:52.359 } 00:21:52.359 ] 00:21:52.359 }' 00:21:52.359 01:04:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:52.618 01:04:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.618 01:04:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:52.618 01:04:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.618 01:04:26 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:52.618 01:04:26 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:52.618 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:52.618 01:04:26 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:52.618 01:04:26 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:52.618 01:04:26 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:52.618 01:04:26 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:52.879 [2024-11-18 01:04:27.071605] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:52.879 [2024-11-18 01:04:27.085469] 
bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e5c0 00:21:52.879 01:04:27 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:52.879 01:04:27 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:52.879 01:04:27 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.879 01:04:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:52.879 01:04:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:52.879 01:04:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:52.879 01:04:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:52.879 01:04:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.879 01:04:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.140 01:04:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:53.140 "name": "raid_bdev1", 00:21:53.140 "uuid": "894f2571-6641-4d94-b5a8-8e40bb554f0a", 00:21:53.140 "strip_size_kb": 0, 00:21:53.140 "state": "online", 00:21:53.140 "raid_level": "raid1", 00:21:53.140 "superblock": true, 00:21:53.140 "num_base_bdevs": 4, 00:21:53.140 "num_base_bdevs_discovered": 3, 00:21:53.140 "num_base_bdevs_operational": 3, 00:21:53.140 "process": { 00:21:53.140 "type": "rebuild", 00:21:53.140 "target": "spare", 00:21:53.140 "progress": { 00:21:53.140 "blocks": 38912, 00:21:53.140 "percent": 61 00:21:53.140 } 00:21:53.140 }, 00:21:53.140 "base_bdevs_list": [ 00:21:53.140 { 00:21:53.140 "name": "spare", 00:21:53.140 "uuid": "7797c605-74cc-52f3-a031-008721a045ea", 00:21:53.140 "is_configured": true, 00:21:53.140 "data_offset": 2048, 00:21:53.140 "data_size": 63488 00:21:53.140 }, 00:21:53.140 { 00:21:53.140 "name": null, 00:21:53.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.140 "is_configured": false, 00:21:53.140 "data_offset": 2048, 00:21:53.140 "data_size": 63488 00:21:53.140 }, 00:21:53.140 { 00:21:53.140 "name": "BaseBdev3", 00:21:53.140 "uuid": "d2232ae4-8047-526c-8617-129a36d1664c", 00:21:53.140 "is_configured": true, 00:21:53.140 "data_offset": 2048, 00:21:53.140 "data_size": 63488 00:21:53.140 }, 00:21:53.140 { 00:21:53.140 "name": "BaseBdev4", 00:21:53.140 "uuid": "22949cd5-d1c3-52fd-a780-1ad933a6c709", 00:21:53.140 "is_configured": true, 00:21:53.140 "data_offset": 2048, 00:21:53.140 "data_size": 63488 00:21:53.140 } 00:21:53.140 ] 00:21:53.140 }' 00:21:53.140 01:04:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:53.140 01:04:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.140 01:04:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:53.399 01:04:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.399 01:04:27 -- bdev/bdev_raid.sh@657 -- # local timeout=470 00:21:53.399 01:04:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:53.399 01:04:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.399 01:04:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:53.399 01:04:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:53.399 01:04:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:53.399 01:04:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:53.399 01:04:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.399 01:04:27 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.399 01:04:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:53.399 "name": "raid_bdev1", 00:21:53.399 "uuid": "894f2571-6641-4d94-b5a8-8e40bb554f0a", 00:21:53.399 "strip_size_kb": 0, 00:21:53.399 "state": "online", 00:21:53.399 "raid_level": "raid1", 00:21:53.399 "superblock": true, 00:21:53.399 "num_base_bdevs": 4, 00:21:53.399 "num_base_bdevs_discovered": 3, 00:21:53.399 "num_base_bdevs_operational": 3, 00:21:53.399 "process": { 00:21:53.399 "type": "rebuild", 00:21:53.399 "target": "spare", 00:21:53.399 "progress": { 00:21:53.399 "blocks": 45056, 00:21:53.399 "percent": 70 00:21:53.399 } 00:21:53.399 }, 00:21:53.399 "base_bdevs_list": [ 00:21:53.399 { 00:21:53.399 "name": "spare", 00:21:53.399 "uuid": "7797c605-74cc-52f3-a031-008721a045ea", 00:21:53.399 "is_configured": true, 00:21:53.399 "data_offset": 2048, 00:21:53.399 "data_size": 63488 00:21:53.399 }, 00:21:53.399 { 00:21:53.399 "name": null, 00:21:53.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.399 "is_configured": false, 00:21:53.399 "data_offset": 2048, 00:21:53.399 "data_size": 63488 00:21:53.399 }, 00:21:53.399 { 00:21:53.399 "name": "BaseBdev3", 00:21:53.399 "uuid": "d2232ae4-8047-526c-8617-129a36d1664c", 00:21:53.399 "is_configured": true, 00:21:53.399 "data_offset": 2048, 00:21:53.399 "data_size": 63488 00:21:53.399 }, 00:21:53.399 { 00:21:53.399 "name": "BaseBdev4", 00:21:53.399 "uuid": "22949cd5-d1c3-52fd-a780-1ad933a6c709", 00:21:53.399 "is_configured": true, 00:21:53.399 "data_offset": 2048, 00:21:53.399 "data_size": 63488 00:21:53.399 } 00:21:53.399 ] 00:21:53.399 }' 00:21:53.399 01:04:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:53.399 01:04:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.399 01:04:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:53.658 01:04:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.658 01:04:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:54.227 [2024-11-18 01:04:28.596892] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:54.227 [2024-11-18 01:04:28.597318] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:54.227 [2024-11-18 01:04:28.597599] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.487 01:04:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:54.487 01:04:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.487 01:04:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:54.487 01:04:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:54.487 01:04:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:54.487 01:04:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:54.487 01:04:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.487 01:04:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.746 01:04:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:54.746 "name": "raid_bdev1", 00:21:54.746 "uuid": "894f2571-6641-4d94-b5a8-8e40bb554f0a", 00:21:54.746 "strip_size_kb": 0, 00:21:54.746 "state": "online", 00:21:54.746 "raid_level": "raid1", 00:21:54.746 "superblock": true, 00:21:54.746 "num_base_bdevs": 4, 00:21:54.746 
"num_base_bdevs_discovered": 3, 00:21:54.746 "num_base_bdevs_operational": 3, 00:21:54.746 "base_bdevs_list": [ 00:21:54.746 { 00:21:54.746 "name": "spare", 00:21:54.746 "uuid": "7797c605-74cc-52f3-a031-008721a045ea", 00:21:54.746 "is_configured": true, 00:21:54.746 "data_offset": 2048, 00:21:54.746 "data_size": 63488 00:21:54.746 }, 00:21:54.746 { 00:21:54.746 "name": null, 00:21:54.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.746 "is_configured": false, 00:21:54.746 "data_offset": 2048, 00:21:54.746 "data_size": 63488 00:21:54.746 }, 00:21:54.746 { 00:21:54.746 "name": "BaseBdev3", 00:21:54.746 "uuid": "d2232ae4-8047-526c-8617-129a36d1664c", 00:21:54.746 "is_configured": true, 00:21:54.746 "data_offset": 2048, 00:21:54.746 "data_size": 63488 00:21:54.746 }, 00:21:54.746 { 00:21:54.746 "name": "BaseBdev4", 00:21:54.746 "uuid": "22949cd5-d1c3-52fd-a780-1ad933a6c709", 00:21:54.746 "is_configured": true, 00:21:54.746 "data_offset": 2048, 00:21:54.746 "data_size": 63488 00:21:54.746 } 00:21:54.746 ] 00:21:54.746 }' 00:21:54.746 01:04:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:54.746 01:04:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:54.746 01:04:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:55.006 01:04:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:55.006 01:04:29 -- bdev/bdev_raid.sh@660 -- # break 00:21:55.006 01:04:29 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:55.006 01:04:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:55.006 01:04:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:55.006 01:04:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:55.006 01:04:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:55.006 01:04:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.006 01:04:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.006 01:04:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:55.006 "name": "raid_bdev1", 00:21:55.006 "uuid": "894f2571-6641-4d94-b5a8-8e40bb554f0a", 00:21:55.006 "strip_size_kb": 0, 00:21:55.006 "state": "online", 00:21:55.006 "raid_level": "raid1", 00:21:55.006 "superblock": true, 00:21:55.006 "num_base_bdevs": 4, 00:21:55.006 "num_base_bdevs_discovered": 3, 00:21:55.006 "num_base_bdevs_operational": 3, 00:21:55.006 "base_bdevs_list": [ 00:21:55.006 { 00:21:55.006 "name": "spare", 00:21:55.006 "uuid": "7797c605-74cc-52f3-a031-008721a045ea", 00:21:55.006 "is_configured": true, 00:21:55.006 "data_offset": 2048, 00:21:55.006 "data_size": 63488 00:21:55.006 }, 00:21:55.006 { 00:21:55.006 "name": null, 00:21:55.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.006 "is_configured": false, 00:21:55.006 "data_offset": 2048, 00:21:55.006 "data_size": 63488 00:21:55.006 }, 00:21:55.006 { 00:21:55.006 "name": "BaseBdev3", 00:21:55.006 "uuid": "d2232ae4-8047-526c-8617-129a36d1664c", 00:21:55.006 "is_configured": true, 00:21:55.006 "data_offset": 2048, 00:21:55.006 "data_size": 63488 00:21:55.006 }, 00:21:55.006 { 00:21:55.006 "name": "BaseBdev4", 00:21:55.006 "uuid": "22949cd5-d1c3-52fd-a780-1ad933a6c709", 00:21:55.006 "is_configured": true, 00:21:55.006 "data_offset": 2048, 00:21:55.006 "data_size": 63488 00:21:55.006 } 00:21:55.006 ] 00:21:55.006 }' 00:21:55.006 01:04:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 
00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.266 01:04:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.525 01:04:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:55.525 "name": "raid_bdev1", 00:21:55.525 "uuid": "894f2571-6641-4d94-b5a8-8e40bb554f0a", 00:21:55.525 "strip_size_kb": 0, 00:21:55.525 "state": "online", 00:21:55.525 "raid_level": "raid1", 00:21:55.525 "superblock": true, 00:21:55.525 "num_base_bdevs": 4, 00:21:55.525 "num_base_bdevs_discovered": 3, 00:21:55.525 "num_base_bdevs_operational": 3, 00:21:55.525 "base_bdevs_list": [ 00:21:55.525 { 00:21:55.525 "name": "spare", 00:21:55.525 "uuid": "7797c605-74cc-52f3-a031-008721a045ea", 00:21:55.525 "is_configured": true, 00:21:55.525 "data_offset": 2048, 00:21:55.525 "data_size": 63488 00:21:55.525 }, 00:21:55.525 { 00:21:55.525 "name": null, 00:21:55.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.525 "is_configured": false, 00:21:55.525 "data_offset": 2048, 00:21:55.525 "data_size": 63488 00:21:55.525 }, 00:21:55.525 { 00:21:55.525 "name": "BaseBdev3", 00:21:55.525 "uuid": "d2232ae4-8047-526c-8617-129a36d1664c", 00:21:55.525 "is_configured": true, 00:21:55.525 "data_offset": 2048, 00:21:55.525 "data_size": 63488 00:21:55.525 }, 00:21:55.525 { 00:21:55.525 "name": "BaseBdev4", 00:21:55.525 "uuid": "22949cd5-d1c3-52fd-a780-1ad933a6c709", 00:21:55.525 "is_configured": true, 00:21:55.525 "data_offset": 2048, 00:21:55.525 "data_size": 63488 00:21:55.525 } 00:21:55.525 ] 00:21:55.525 }' 00:21:55.525 01:04:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:55.525 01:04:29 -- common/autotest_common.sh@10 -- # set +x 00:21:56.092 01:04:30 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:56.092 [2024-11-18 01:04:30.493159] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:56.092 [2024-11-18 01:04:30.493441] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:56.092 [2024-11-18 01:04:30.493719] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:56.092 [2024-11-18 01:04:30.493931] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:56.092 [2024-11-18 01:04:30.494011] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:21:56.350 01:04:30 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:56.350 01:04:30 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.350 01:04:30 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:56.350 01:04:30 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:56.350 01:04:30 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:56.350 01:04:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:56.350 01:04:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:56.350 01:04:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:56.350 01:04:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:56.350 01:04:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:56.350 01:04:30 -- bdev/nbd_common.sh@12 -- # local i 00:21:56.350 01:04:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:56.350 01:04:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.350 01:04:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:56.608 /dev/nbd0 00:21:56.608 01:04:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:56.608 01:04:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:56.608 01:04:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:56.608 01:04:30 -- common/autotest_common.sh@867 -- # local i 00:21:56.608 01:04:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:56.608 01:04:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:56.608 01:04:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:56.608 01:04:30 -- common/autotest_common.sh@871 -- # break 00:21:56.608 01:04:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:56.608 01:04:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:56.608 01:04:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:56.608 1+0 records in 00:21:56.608 1+0 records out 00:21:56.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000907082 s, 4.5 MB/s 00:21:56.608 01:04:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.608 01:04:30 -- common/autotest_common.sh@884 -- # size=4096 00:21:56.608 01:04:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.608 01:04:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:56.608 01:04:30 -- common/autotest_common.sh@887 -- # return 0 00:21:56.608 01:04:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:56.608 01:04:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.608 01:04:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:56.866 /dev/nbd1 00:21:56.866 01:04:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:57.123 01:04:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:57.123 01:04:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:57.123 01:04:31 -- common/autotest_common.sh@867 -- # local i 00:21:57.123 01:04:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:57.123 01:04:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:57.123 01:04:31 -- 
common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:57.123 01:04:31 -- common/autotest_common.sh@871 -- # break 00:21:57.123 01:04:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:57.123 01:04:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:57.123 01:04:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:57.123 1+0 records in 00:21:57.123 1+0 records out 00:21:57.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599944 s, 6.8 MB/s 00:21:57.123 01:04:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:57.123 01:04:31 -- common/autotest_common.sh@884 -- # size=4096 00:21:57.123 01:04:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:57.123 01:04:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:57.123 01:04:31 -- common/autotest_common.sh@887 -- # return 0 00:21:57.123 01:04:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:57.124 01:04:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:57.124 01:04:31 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:57.124 01:04:31 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:57.124 01:04:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:57.124 01:04:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:57.124 01:04:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:57.124 01:04:31 -- bdev/nbd_common.sh@51 -- # local i 00:21:57.124 01:04:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:57.124 01:04:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:57.381 01:04:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:57.381 01:04:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:57.382 01:04:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:57.382 01:04:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:57.382 01:04:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:57.382 01:04:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:57.382 01:04:31 -- bdev/nbd_common.sh@41 -- # break 00:21:57.382 01:04:31 -- bdev/nbd_common.sh@45 -- # return 0 00:21:57.382 01:04:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:57.382 01:04:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:57.640 01:04:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:57.640 01:04:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:57.640 01:04:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:57.640 01:04:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:57.640 01:04:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:57.640 01:04:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:57.640 01:04:31 -- bdev/nbd_common.sh@41 -- # break 00:21:57.640 01:04:31 -- bdev/nbd_common.sh@45 -- # return 0 00:21:57.640 01:04:31 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:57.640 01:04:31 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:57.640 01:04:31 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:57.640 01:04:31 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete BaseBdev1 00:21:57.898 01:04:32 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:57.898 [2024-11-18 01:04:32.296263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:57.898 [2024-11-18 01:04:32.296635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.898 [2024-11-18 01:04:32.296727] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:57.898 [2024-11-18 01:04:32.296882] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.898 [2024-11-18 01:04:32.299936] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.898 [2024-11-18 01:04:32.300130] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:58.156 [2024-11-18 01:04:32.300328] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:58.156 [2024-11-18 01:04:32.300499] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:58.156 BaseBdev1 00:21:58.156 01:04:32 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:58.156 01:04:32 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:21:58.156 01:04:32 -- bdev/bdev_raid.sh@696 -- # continue 00:21:58.156 01:04:32 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:58.156 01:04:32 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:21:58.156 01:04:32 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:21:58.156 01:04:32 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:58.414 [2024-11-18 01:04:32.680545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:58.414 [2024-11-18 01:04:32.680852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.414 [2024-11-18 01:04:32.680984] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:58.414 [2024-11-18 01:04:32.681078] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.414 [2024-11-18 01:04:32.681656] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.415 [2024-11-18 01:04:32.681831] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:58.415 [2024-11-18 01:04:32.682016] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:21:58.415 [2024-11-18 01:04:32.682110] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:21:58.415 [2024-11-18 01:04:32.682203] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:58.415 [2024-11-18 01:04:32.682269] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state configuring 00:21:58.415 [2024-11-18 01:04:32.682403] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:58.415 BaseBdev3 00:21:58.415 01:04:32 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:58.415 01:04:32 -- bdev/bdev_raid.sh@695 -- # '[' -z 
BaseBdev4 ']' 00:21:58.415 01:04:32 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:21:58.673 01:04:32 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:58.673 [2024-11-18 01:04:33.068565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:58.673 [2024-11-18 01:04:33.068939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.673 [2024-11-18 01:04:33.069028] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:58.673 [2024-11-18 01:04:33.069139] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.673 [2024-11-18 01:04:33.069658] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.673 [2024-11-18 01:04:33.069864] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:58.673 [2024-11-18 01:04:33.070033] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:21:58.673 [2024-11-18 01:04:33.070152] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:58.673 BaseBdev4 00:21:58.932 01:04:33 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:58.932 01:04:33 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:59.191 [2024-11-18 01:04:33.444662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:59.191 [2024-11-18 01:04:33.445056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.191 [2024-11-18 01:04:33.445132] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:21:59.191 [2024-11-18 01:04:33.445231] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.191 [2024-11-18 01:04:33.445778] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.191 [2024-11-18 01:04:33.445863] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:59.191 [2024-11-18 01:04:33.446045] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:59.191 [2024-11-18 01:04:33.446209] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:59.191 spare 00:21:59.191 01:04:33 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:59.191 01:04:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:59.191 01:04:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:59.191 01:04:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:59.191 01:04:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:59.191 01:04:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:59.191 01:04:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:59.191 01:04:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:59.191 01:04:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:59.191 01:04:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:59.191 01:04:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:21:59.191 01:04:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.191 [2024-11-18 01:04:33.546413] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:21:59.191 [2024-11-18 01:04:33.546702] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:59.191 [2024-11-18 01:04:33.546967] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caf0b0 00:21:59.191 [2024-11-18 01:04:33.547643] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:21:59.191 [2024-11-18 01:04:33.547756] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:21:59.191 [2024-11-18 01:04:33.548012] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.449 01:04:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:59.449 "name": "raid_bdev1", 00:21:59.450 "uuid": "894f2571-6641-4d94-b5a8-8e40bb554f0a", 00:21:59.450 "strip_size_kb": 0, 00:21:59.450 "state": "online", 00:21:59.450 "raid_level": "raid1", 00:21:59.450 "superblock": true, 00:21:59.450 "num_base_bdevs": 4, 00:21:59.450 "num_base_bdevs_discovered": 3, 00:21:59.450 "num_base_bdevs_operational": 3, 00:21:59.450 "base_bdevs_list": [ 00:21:59.450 { 00:21:59.450 "name": "spare", 00:21:59.450 "uuid": "7797c605-74cc-52f3-a031-008721a045ea", 00:21:59.450 "is_configured": true, 00:21:59.450 "data_offset": 2048, 00:21:59.450 "data_size": 63488 00:21:59.450 }, 00:21:59.450 { 00:21:59.450 "name": null, 00:21:59.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.450 "is_configured": false, 00:21:59.450 "data_offset": 2048, 00:21:59.450 "data_size": 63488 00:21:59.450 }, 00:21:59.450 { 00:21:59.450 "name": "BaseBdev3", 00:21:59.450 "uuid": "d2232ae4-8047-526c-8617-129a36d1664c", 00:21:59.450 "is_configured": true, 00:21:59.450 "data_offset": 2048, 00:21:59.450 "data_size": 63488 00:21:59.450 }, 00:21:59.450 { 00:21:59.450 "name": "BaseBdev4", 00:21:59.450 "uuid": "22949cd5-d1c3-52fd-a780-1ad933a6c709", 00:21:59.450 "is_configured": true, 00:21:59.450 "data_offset": 2048, 00:21:59.450 "data_size": 63488 00:21:59.450 } 00:21:59.450 ] 00:21:59.450 }' 00:21:59.450 01:04:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:59.450 01:04:33 -- common/autotest_common.sh@10 -- # set +x 00:22:00.016 01:04:34 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:00.016 01:04:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:00.016 01:04:34 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:00.016 01:04:34 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:00.016 01:04:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:00.016 01:04:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.016 01:04:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.016 01:04:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:00.016 "name": "raid_bdev1", 00:22:00.016 "uuid": "894f2571-6641-4d94-b5a8-8e40bb554f0a", 00:22:00.016 "strip_size_kb": 0, 00:22:00.016 "state": "online", 00:22:00.016 "raid_level": "raid1", 00:22:00.016 "superblock": true, 00:22:00.016 "num_base_bdevs": 4, 00:22:00.016 "num_base_bdevs_discovered": 3, 00:22:00.016 "num_base_bdevs_operational": 3, 
00:22:00.016 "base_bdevs_list": [ 00:22:00.016 { 00:22:00.016 "name": "spare", 00:22:00.016 "uuid": "7797c605-74cc-52f3-a031-008721a045ea", 00:22:00.016 "is_configured": true, 00:22:00.016 "data_offset": 2048, 00:22:00.016 "data_size": 63488 00:22:00.016 }, 00:22:00.016 { 00:22:00.016 "name": null, 00:22:00.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.016 "is_configured": false, 00:22:00.016 "data_offset": 2048, 00:22:00.016 "data_size": 63488 00:22:00.016 }, 00:22:00.016 { 00:22:00.016 "name": "BaseBdev3", 00:22:00.016 "uuid": "d2232ae4-8047-526c-8617-129a36d1664c", 00:22:00.016 "is_configured": true, 00:22:00.016 "data_offset": 2048, 00:22:00.016 "data_size": 63488 00:22:00.016 }, 00:22:00.016 { 00:22:00.016 "name": "BaseBdev4", 00:22:00.016 "uuid": "22949cd5-d1c3-52fd-a780-1ad933a6c709", 00:22:00.016 "is_configured": true, 00:22:00.016 "data_offset": 2048, 00:22:00.016 "data_size": 63488 00:22:00.016 } 00:22:00.016 ] 00:22:00.016 }' 00:22:00.016 01:04:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:00.275 01:04:34 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:00.275 01:04:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:00.275 01:04:34 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:00.275 01:04:34 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.275 01:04:34 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:00.533 01:04:34 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:00.533 01:04:34 -- bdev/bdev_raid.sh@709 -- # killprocess 135741 00:22:00.533 01:04:34 -- common/autotest_common.sh@936 -- # '[' -z 135741 ']' 00:22:00.533 01:04:34 -- common/autotest_common.sh@940 -- # kill -0 135741 00:22:00.533 01:04:34 -- common/autotest_common.sh@941 -- # uname 00:22:00.533 01:04:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:00.533 01:04:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135741 00:22:00.533 01:04:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:00.533 01:04:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:00.533 01:04:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135741' 00:22:00.533 killing process with pid 135741 00:22:00.533 01:04:34 -- common/autotest_common.sh@955 -- # kill 135741 00:22:00.533 Received shutdown signal, test time was about 60.000000 seconds 00:22:00.533 00:22:00.533 Latency(us) 00:22:00.533 [2024-11-18T01:04:34.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.533 [2024-11-18T01:04:34.932Z] =================================================================================================================== 00:22:00.533 [2024-11-18T01:04:34.932Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:00.533 01:04:34 -- common/autotest_common.sh@960 -- # wait 135741 00:22:00.533 [2024-11-18 01:04:34.708162] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:00.533 [2024-11-18 01:04:34.708433] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:00.533 [2024-11-18 01:04:34.708628] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:00.533 [2024-11-18 01:04:34.708724] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:22:00.533 
[2024-11-18 01:04:34.804646] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:01.102 00:22:01.102 real 0m24.380s 00:22:01.102 user 0m34.809s 00:22:01.102 sys 0m5.086s 00:22:01.102 01:04:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:01.102 01:04:35 -- common/autotest_common.sh@10 -- # set +x 00:22:01.102 ************************************ 00:22:01.102 END TEST raid_rebuild_test_sb 00:22:01.102 ************************************ 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:22:01.102 01:04:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:01.102 01:04:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:01.102 01:04:35 -- common/autotest_common.sh@10 -- # set +x 00:22:01.102 ************************************ 00:22:01.102 START TEST raid_rebuild_test_io 00:22:01.102 ************************************ 00:22:01.102 01:04:35 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false true 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@544 -- # raid_pid=136361 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136361 /var/tmp/spdk-raid.sock 00:22:01.102 01:04:35 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:01.102 01:04:35 -- 
common/autotest_common.sh@829 -- # '[' -z 136361 ']' 00:22:01.102 01:04:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:01.102 01:04:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.102 01:04:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:01.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:01.102 01:04:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.102 01:04:35 -- common/autotest_common.sh@10 -- # set +x 00:22:01.102 [2024-11-18 01:04:35.385899] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:01.102 [2024-11-18 01:04:35.386449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136361 ] 00:22:01.102 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:01.102 Zero copy mechanism will not be used. 00:22:01.361 [2024-11-18 01:04:35.539864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.361 [2024-11-18 01:04:35.619329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.361 [2024-11-18 01:04:35.698033] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:01.927 01:04:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.927 01:04:36 -- common/autotest_common.sh@862 -- # return 0 00:22:01.927 01:04:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:01.927 01:04:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:01.927 01:04:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:02.186 BaseBdev1 00:22:02.186 01:04:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:02.186 01:04:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:02.186 01:04:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:02.445 BaseBdev2 00:22:02.445 01:04:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:02.445 01:04:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:02.445 01:04:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:02.704 BaseBdev3 00:22:02.704 01:04:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:02.704 01:04:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:02.704 01:04:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:02.962 BaseBdev4 00:22:02.962 01:04:37 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:03.220 spare_malloc 00:22:03.220 01:04:37 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:03.220 spare_delay 00:22:03.220 01:04:37 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b spare_delay -p spare 00:22:03.493 [2024-11-18 01:04:37.757952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:03.493 [2024-11-18 01:04:37.758363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.493 [2024-11-18 01:04:37.758457] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:03.493 [2024-11-18 01:04:37.758590] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.494 [2024-11-18 01:04:37.761607] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.494 [2024-11-18 01:04:37.761776] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:03.494 spare 00:22:03.494 01:04:37 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:03.805 [2024-11-18 01:04:37.954276] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.805 [2024-11-18 01:04:37.957056] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:03.805 [2024-11-18 01:04:37.957230] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:03.805 [2024-11-18 01:04:37.957293] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:03.805 [2024-11-18 01:04:37.957529] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:22:03.805 [2024-11-18 01:04:37.957614] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:03.805 [2024-11-18 01:04:37.957867] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:22:03.805 [2024-11-18 01:04:37.958434] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:22:03.805 [2024-11-18 01:04:37.958544] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:22:03.805 [2024-11-18 01:04:37.958888] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.805 01:04:37 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:03.805 01:04:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:03.805 01:04:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:03.805 01:04:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:03.805 01:04:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:03.805 01:04:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:03.805 01:04:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:03.805 01:04:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:03.805 01:04:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:03.805 01:04:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:03.805 01:04:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.805 01:04:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.805 01:04:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:03.805 "name": "raid_bdev1", 00:22:03.805 "uuid": "8480960b-6bf8-4344-9819-3455927e67b4", 00:22:03.805 "strip_size_kb": 0, 00:22:03.805 "state": "online", 
00:22:03.805 "raid_level": "raid1", 00:22:03.805 "superblock": false, 00:22:03.805 "num_base_bdevs": 4, 00:22:03.805 "num_base_bdevs_discovered": 4, 00:22:03.805 "num_base_bdevs_operational": 4, 00:22:03.805 "base_bdevs_list": [ 00:22:03.805 { 00:22:03.805 "name": "BaseBdev1", 00:22:03.805 "uuid": "49366aaa-3a8e-4440-a938-d814c2c7044b", 00:22:03.805 "is_configured": true, 00:22:03.805 "data_offset": 0, 00:22:03.805 "data_size": 65536 00:22:03.805 }, 00:22:03.805 { 00:22:03.805 "name": "BaseBdev2", 00:22:03.805 "uuid": "1514ff7f-1918-4baf-aaa6-eb02fdb736b0", 00:22:03.805 "is_configured": true, 00:22:03.805 "data_offset": 0, 00:22:03.805 "data_size": 65536 00:22:03.805 }, 00:22:03.805 { 00:22:03.805 "name": "BaseBdev3", 00:22:03.805 "uuid": "e0bf588a-f14c-4e43-b1b8-2775d2333802", 00:22:03.805 "is_configured": true, 00:22:03.805 "data_offset": 0, 00:22:03.805 "data_size": 65536 00:22:03.805 }, 00:22:03.805 { 00:22:03.805 "name": "BaseBdev4", 00:22:03.805 "uuid": "73c1f36c-f860-4f65-9121-56586bc2219d", 00:22:03.805 "is_configured": true, 00:22:03.805 "data_offset": 0, 00:22:03.805 "data_size": 65536 00:22:03.805 } 00:22:03.805 ] 00:22:03.805 }' 00:22:03.805 01:04:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:03.805 01:04:38 -- common/autotest_common.sh@10 -- # set +x 00:22:04.745 01:04:38 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:04.745 01:04:38 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:04.745 [2024-11-18 01:04:39.019289] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:04.745 01:04:39 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:04.745 01:04:39 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:04.745 01:04:39 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.017 01:04:39 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:05.017 01:04:39 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:05.017 01:04:39 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:05.017 01:04:39 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:05.017 [2024-11-18 01:04:39.398795] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:22:05.017 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:05.017 Zero copy mechanism will not be used. 00:22:05.017 Running I/O for 60 seconds... 
00:22:05.276 [2024-11-18 01:04:39.551707] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:05.276 [2024-11-18 01:04:39.563624] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:22:05.276 01:04:39 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:05.276 01:04:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:05.276 01:04:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:05.276 01:04:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:05.276 01:04:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:05.276 01:04:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:05.276 01:04:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:05.276 01:04:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.276 01:04:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:05.276 01:04:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.276 01:04:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.276 01:04:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.534 01:04:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:05.535 "name": "raid_bdev1", 00:22:05.535 "uuid": "8480960b-6bf8-4344-9819-3455927e67b4", 00:22:05.535 "strip_size_kb": 0, 00:22:05.535 "state": "online", 00:22:05.535 "raid_level": "raid1", 00:22:05.535 "superblock": false, 00:22:05.535 "num_base_bdevs": 4, 00:22:05.535 "num_base_bdevs_discovered": 3, 00:22:05.535 "num_base_bdevs_operational": 3, 00:22:05.535 "base_bdevs_list": [ 00:22:05.535 { 00:22:05.535 "name": null, 00:22:05.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.535 "is_configured": false, 00:22:05.535 "data_offset": 0, 00:22:05.535 "data_size": 65536 00:22:05.535 }, 00:22:05.535 { 00:22:05.535 "name": "BaseBdev2", 00:22:05.535 "uuid": "1514ff7f-1918-4baf-aaa6-eb02fdb736b0", 00:22:05.535 "is_configured": true, 00:22:05.535 "data_offset": 0, 00:22:05.535 "data_size": 65536 00:22:05.535 }, 00:22:05.535 { 00:22:05.535 "name": "BaseBdev3", 00:22:05.535 "uuid": "e0bf588a-f14c-4e43-b1b8-2775d2333802", 00:22:05.535 "is_configured": true, 00:22:05.535 "data_offset": 0, 00:22:05.535 "data_size": 65536 00:22:05.535 }, 00:22:05.535 { 00:22:05.535 "name": "BaseBdev4", 00:22:05.535 "uuid": "73c1f36c-f860-4f65-9121-56586bc2219d", 00:22:05.535 "is_configured": true, 00:22:05.535 "data_offset": 0, 00:22:05.535 "data_size": 65536 00:22:05.535 } 00:22:05.535 ] 00:22:05.535 }' 00:22:05.535 01:04:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:05.535 01:04:39 -- common/autotest_common.sh@10 -- # set +x 00:22:06.102 01:04:40 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:06.102 [2024-11-18 01:04:40.489166] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:06.102 [2024-11-18 01:04:40.489516] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:06.360 [2024-11-18 01:04:40.524691] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:22:06.360 [2024-11-18 01:04:40.527523] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:06.360 01:04:40 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:06.360 [2024-11-18 
01:04:40.644864] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:06.360 [2024-11-18 01:04:40.645774] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:06.619 [2024-11-18 01:04:40.857959] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:06.619 [2024-11-18 01:04:40.858624] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:06.877 [2024-11-18 01:04:41.112127] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:06.877 [2024-11-18 01:04:41.113955] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:07.135 [2024-11-18 01:04:41.342723] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:07.135 [2024-11-18 01:04:41.343337] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:07.394 01:04:41 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:07.394 01:04:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:07.394 01:04:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:07.394 01:04:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:07.394 01:04:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:07.394 01:04:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.394 01:04:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.394 01:04:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:07.394 "name": "raid_bdev1", 00:22:07.394 "uuid": "8480960b-6bf8-4344-9819-3455927e67b4", 00:22:07.394 "strip_size_kb": 0, 00:22:07.394 "state": "online", 00:22:07.394 "raid_level": "raid1", 00:22:07.394 "superblock": false, 00:22:07.394 "num_base_bdevs": 4, 00:22:07.394 "num_base_bdevs_discovered": 4, 00:22:07.394 "num_base_bdevs_operational": 4, 00:22:07.394 "process": { 00:22:07.394 "type": "rebuild", 00:22:07.394 "target": "spare", 00:22:07.394 "progress": { 00:22:07.394 "blocks": 14336, 00:22:07.394 "percent": 21 00:22:07.394 } 00:22:07.394 }, 00:22:07.394 "base_bdevs_list": [ 00:22:07.394 { 00:22:07.394 "name": "spare", 00:22:07.394 "uuid": "780723fa-320a-5051-a9c5-bdb7b4a21f93", 00:22:07.394 "is_configured": true, 00:22:07.394 "data_offset": 0, 00:22:07.394 "data_size": 65536 00:22:07.394 }, 00:22:07.394 { 00:22:07.394 "name": "BaseBdev2", 00:22:07.394 "uuid": "1514ff7f-1918-4baf-aaa6-eb02fdb736b0", 00:22:07.394 "is_configured": true, 00:22:07.394 "data_offset": 0, 00:22:07.394 "data_size": 65536 00:22:07.394 }, 00:22:07.394 { 00:22:07.394 "name": "BaseBdev3", 00:22:07.394 "uuid": "e0bf588a-f14c-4e43-b1b8-2775d2333802", 00:22:07.394 "is_configured": true, 00:22:07.394 "data_offset": 0, 00:22:07.394 "data_size": 65536 00:22:07.394 }, 00:22:07.394 { 00:22:07.394 "name": "BaseBdev4", 00:22:07.394 "uuid": "73c1f36c-f860-4f65-9121-56586bc2219d", 00:22:07.394 "is_configured": true, 00:22:07.394 "data_offset": 0, 00:22:07.394 "data_size": 65536 00:22:07.394 } 00:22:07.394 ] 00:22:07.394 }' 00:22:07.394 01:04:41 -- bdev/bdev_raid.sh@190 -- # jq 
-r '.process.type // "none"' 00:22:07.653 01:04:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:07.653 01:04:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:07.653 [2024-11-18 01:04:41.828611] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:07.653 01:04:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:07.653 01:04:41 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:07.911 [2024-11-18 01:04:42.077039] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:07.911 [2024-11-18 01:04:42.247110] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:07.911 [2024-11-18 01:04:42.266165] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.911 [2024-11-18 01:04:42.295758] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:22:08.170 01:04:42 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:08.170 01:04:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:08.170 01:04:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:08.170 01:04:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:08.170 01:04:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:08.170 01:04:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:08.170 01:04:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:08.170 01:04:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:08.170 01:04:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:08.170 01:04:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:08.170 01:04:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.170 01:04:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.428 01:04:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:08.428 "name": "raid_bdev1", 00:22:08.428 "uuid": "8480960b-6bf8-4344-9819-3455927e67b4", 00:22:08.428 "strip_size_kb": 0, 00:22:08.428 "state": "online", 00:22:08.428 "raid_level": "raid1", 00:22:08.428 "superblock": false, 00:22:08.428 "num_base_bdevs": 4, 00:22:08.428 "num_base_bdevs_discovered": 3, 00:22:08.428 "num_base_bdevs_operational": 3, 00:22:08.428 "base_bdevs_list": [ 00:22:08.428 { 00:22:08.428 "name": null, 00:22:08.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.428 "is_configured": false, 00:22:08.428 "data_offset": 0, 00:22:08.428 "data_size": 65536 00:22:08.428 }, 00:22:08.428 { 00:22:08.428 "name": "BaseBdev2", 00:22:08.428 "uuid": "1514ff7f-1918-4baf-aaa6-eb02fdb736b0", 00:22:08.428 "is_configured": true, 00:22:08.428 "data_offset": 0, 00:22:08.428 "data_size": 65536 00:22:08.428 }, 00:22:08.428 { 00:22:08.428 "name": "BaseBdev3", 00:22:08.428 "uuid": "e0bf588a-f14c-4e43-b1b8-2775d2333802", 00:22:08.428 "is_configured": true, 00:22:08.428 "data_offset": 0, 00:22:08.428 "data_size": 65536 00:22:08.428 }, 00:22:08.428 { 00:22:08.428 "name": "BaseBdev4", 00:22:08.428 "uuid": "73c1f36c-f860-4f65-9121-56586bc2219d", 00:22:08.428 "is_configured": true, 00:22:08.428 "data_offset": 0, 00:22:08.428 "data_size": 65536 00:22:08.428 } 00:22:08.428 ] 00:22:08.428 }' 00:22:08.428 01:04:42 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:08.428 01:04:42 -- common/autotest_common.sh@10 -- # set +x 00:22:08.994 01:04:43 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:08.994 01:04:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:08.994 01:04:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:08.994 01:04:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:08.994 01:04:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:08.994 01:04:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.994 01:04:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.252 01:04:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:09.252 "name": "raid_bdev1", 00:22:09.252 "uuid": "8480960b-6bf8-4344-9819-3455927e67b4", 00:22:09.252 "strip_size_kb": 0, 00:22:09.252 "state": "online", 00:22:09.252 "raid_level": "raid1", 00:22:09.252 "superblock": false, 00:22:09.252 "num_base_bdevs": 4, 00:22:09.252 "num_base_bdevs_discovered": 3, 00:22:09.252 "num_base_bdevs_operational": 3, 00:22:09.252 "base_bdevs_list": [ 00:22:09.252 { 00:22:09.252 "name": null, 00:22:09.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.252 "is_configured": false, 00:22:09.252 "data_offset": 0, 00:22:09.252 "data_size": 65536 00:22:09.252 }, 00:22:09.252 { 00:22:09.252 "name": "BaseBdev2", 00:22:09.252 "uuid": "1514ff7f-1918-4baf-aaa6-eb02fdb736b0", 00:22:09.252 "is_configured": true, 00:22:09.252 "data_offset": 0, 00:22:09.252 "data_size": 65536 00:22:09.252 }, 00:22:09.252 { 00:22:09.252 "name": "BaseBdev3", 00:22:09.252 "uuid": "e0bf588a-f14c-4e43-b1b8-2775d2333802", 00:22:09.252 "is_configured": true, 00:22:09.252 "data_offset": 0, 00:22:09.252 "data_size": 65536 00:22:09.252 }, 00:22:09.252 { 00:22:09.252 "name": "BaseBdev4", 00:22:09.252 "uuid": "73c1f36c-f860-4f65-9121-56586bc2219d", 00:22:09.252 "is_configured": true, 00:22:09.252 "data_offset": 0, 00:22:09.252 "data_size": 65536 00:22:09.252 } 00:22:09.252 ] 00:22:09.252 }' 00:22:09.252 01:04:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:09.252 01:04:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:09.252 01:04:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:09.252 01:04:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:09.253 01:04:43 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:09.511 [2024-11-18 01:04:43.815815] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:09.511 [2024-11-18 01:04:43.816151] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:09.511 [2024-11-18 01:04:43.852253] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:22:09.511 [2024-11-18 01:04:43.855066] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:09.511 01:04:43 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:09.770 [2024-11-18 01:04:43.978861] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:09.770 [2024-11-18 01:04:43.980746] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:10.028 [2024-11-18 01:04:44.186736] 
bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:10.028 [2024-11-18 01:04:44.187869] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:10.287 [2024-11-18 01:04:44.527417] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:10.287 [2024-11-18 01:04:44.668796] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:10.546 01:04:44 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.546 01:04:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:10.546 01:04:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:10.546 01:04:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:10.546 01:04:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:10.546 01:04:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.546 01:04:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.804 [2024-11-18 01:04:45.039764] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:10.804 [2024-11-18 01:04:45.041474] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:10.804 01:04:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:10.804 "name": "raid_bdev1", 00:22:10.804 "uuid": "8480960b-6bf8-4344-9819-3455927e67b4", 00:22:10.804 "strip_size_kb": 0, 00:22:10.804 "state": "online", 00:22:10.804 "raid_level": "raid1", 00:22:10.804 "superblock": false, 00:22:10.804 "num_base_bdevs": 4, 00:22:10.804 "num_base_bdevs_discovered": 4, 00:22:10.804 "num_base_bdevs_operational": 4, 00:22:10.804 "process": { 00:22:10.804 "type": "rebuild", 00:22:10.804 "target": "spare", 00:22:10.804 "progress": { 00:22:10.804 "blocks": 14336, 00:22:10.804 "percent": 21 00:22:10.804 } 00:22:10.804 }, 00:22:10.804 "base_bdevs_list": [ 00:22:10.804 { 00:22:10.804 "name": "spare", 00:22:10.804 "uuid": "780723fa-320a-5051-a9c5-bdb7b4a21f93", 00:22:10.804 "is_configured": true, 00:22:10.804 "data_offset": 0, 00:22:10.804 "data_size": 65536 00:22:10.804 }, 00:22:10.804 { 00:22:10.804 "name": "BaseBdev2", 00:22:10.804 "uuid": "1514ff7f-1918-4baf-aaa6-eb02fdb736b0", 00:22:10.804 "is_configured": true, 00:22:10.804 "data_offset": 0, 00:22:10.804 "data_size": 65536 00:22:10.804 }, 00:22:10.804 { 00:22:10.804 "name": "BaseBdev3", 00:22:10.804 "uuid": "e0bf588a-f14c-4e43-b1b8-2775d2333802", 00:22:10.804 "is_configured": true, 00:22:10.804 "data_offset": 0, 00:22:10.804 "data_size": 65536 00:22:10.804 }, 00:22:10.804 { 00:22:10.804 "name": "BaseBdev4", 00:22:10.804 "uuid": "73c1f36c-f860-4f65-9121-56586bc2219d", 00:22:10.804 "is_configured": true, 00:22:10.804 "data_offset": 0, 00:22:10.804 "data_size": 65536 00:22:10.804 } 00:22:10.804 ] 00:22:10.804 }' 00:22:10.804 01:04:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:10.804 01:04:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.804 01:04:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:11.063 01:04:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.063 01:04:45 -- bdev/bdev_raid.sh@617 -- # 
'[' false = true ']' 00:22:11.063 01:04:45 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:11.063 01:04:45 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:11.063 01:04:45 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:11.063 01:04:45 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:11.063 [2024-11-18 01:04:45.286945] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:11.063 [2024-11-18 01:04:45.441890] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:11.321 [2024-11-18 01:04:45.529832] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002390 00:22:11.321 [2024-11-18 01:04:45.530196] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002600 00:22:11.321 01:04:45 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:11.321 01:04:45 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:11.321 01:04:45 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.321 01:04:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:11.321 01:04:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:11.321 01:04:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:11.321 01:04:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:11.321 01:04:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.321 01:04:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.321 [2024-11-18 01:04:45.641751] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:11.321 [2024-11-18 01:04:45.642385] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:11.580 01:04:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:11.580 "name": "raid_bdev1", 00:22:11.580 "uuid": "8480960b-6bf8-4344-9819-3455927e67b4", 00:22:11.580 "strip_size_kb": 0, 00:22:11.580 "state": "online", 00:22:11.580 "raid_level": "raid1", 00:22:11.580 "superblock": false, 00:22:11.580 "num_base_bdevs": 4, 00:22:11.580 "num_base_bdevs_discovered": 3, 00:22:11.580 "num_base_bdevs_operational": 3, 00:22:11.580 "process": { 00:22:11.580 "type": "rebuild", 00:22:11.580 "target": "spare", 00:22:11.580 "progress": { 00:22:11.580 "blocks": 22528, 00:22:11.580 "percent": 34 00:22:11.580 } 00:22:11.580 }, 00:22:11.580 "base_bdevs_list": [ 00:22:11.580 { 00:22:11.580 "name": "spare", 00:22:11.580 "uuid": "780723fa-320a-5051-a9c5-bdb7b4a21f93", 00:22:11.580 "is_configured": true, 00:22:11.580 "data_offset": 0, 00:22:11.580 "data_size": 65536 00:22:11.580 }, 00:22:11.580 { 00:22:11.580 "name": null, 00:22:11.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.580 "is_configured": false, 00:22:11.580 "data_offset": 0, 00:22:11.580 "data_size": 65536 00:22:11.580 }, 00:22:11.580 { 00:22:11.580 "name": "BaseBdev3", 00:22:11.580 "uuid": "e0bf588a-f14c-4e43-b1b8-2775d2333802", 00:22:11.580 "is_configured": true, 00:22:11.580 "data_offset": 0, 00:22:11.580 "data_size": 65536 00:22:11.580 }, 00:22:11.580 { 00:22:11.580 "name": "BaseBdev4", 00:22:11.580 "uuid": "73c1f36c-f860-4f65-9121-56586bc2219d", 00:22:11.580 
"is_configured": true, 00:22:11.580 "data_offset": 0, 00:22:11.580 "data_size": 65536 00:22:11.580 } 00:22:11.580 ] 00:22:11.580 }' 00:22:11.580 01:04:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:11.580 01:04:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.580 01:04:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:11.580 01:04:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.580 01:04:45 -- bdev/bdev_raid.sh@657 -- # local timeout=488 00:22:11.580 01:04:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:11.580 01:04:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.580 01:04:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:11.580 01:04:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:11.580 01:04:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:11.580 01:04:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:11.580 01:04:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.580 01:04:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.838 [2024-11-18 01:04:46.010353] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:11.838 01:04:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:11.838 "name": "raid_bdev1", 00:22:11.838 "uuid": "8480960b-6bf8-4344-9819-3455927e67b4", 00:22:11.838 "strip_size_kb": 0, 00:22:11.838 "state": "online", 00:22:11.838 "raid_level": "raid1", 00:22:11.838 "superblock": false, 00:22:11.838 "num_base_bdevs": 4, 00:22:11.838 "num_base_bdevs_discovered": 3, 00:22:11.838 "num_base_bdevs_operational": 3, 00:22:11.838 "process": { 00:22:11.838 "type": "rebuild", 00:22:11.838 "target": "spare", 00:22:11.838 "progress": { 00:22:11.838 "blocks": 28672, 00:22:11.838 "percent": 43 00:22:11.838 } 00:22:11.838 }, 00:22:11.838 "base_bdevs_list": [ 00:22:11.838 { 00:22:11.838 "name": "spare", 00:22:11.838 "uuid": "780723fa-320a-5051-a9c5-bdb7b4a21f93", 00:22:11.838 "is_configured": true, 00:22:11.838 "data_offset": 0, 00:22:11.839 "data_size": 65536 00:22:11.839 }, 00:22:11.839 { 00:22:11.839 "name": null, 00:22:11.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.839 "is_configured": false, 00:22:11.839 "data_offset": 0, 00:22:11.839 "data_size": 65536 00:22:11.839 }, 00:22:11.839 { 00:22:11.839 "name": "BaseBdev3", 00:22:11.839 "uuid": "e0bf588a-f14c-4e43-b1b8-2775d2333802", 00:22:11.839 "is_configured": true, 00:22:11.839 "data_offset": 0, 00:22:11.839 "data_size": 65536 00:22:11.839 }, 00:22:11.839 { 00:22:11.839 "name": "BaseBdev4", 00:22:11.839 "uuid": "73c1f36c-f860-4f65-9121-56586bc2219d", 00:22:11.839 "is_configured": true, 00:22:11.839 "data_offset": 0, 00:22:11.839 "data_size": 65536 00:22:11.839 } 00:22:11.839 ] 00:22:11.839 }' 00:22:11.839 01:04:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:11.839 01:04:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.839 01:04:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:11.839 01:04:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.839 01:04:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:11.839 [2024-11-18 01:04:46.229806] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 
offset_end: 36864 00:22:12.097 [2024-11-18 01:04:46.356778] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:12.664 [2024-11-18 01:04:47.031636] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:12.923 01:04:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:12.923 01:04:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.923 01:04:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:12.923 01:04:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:12.923 01:04:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:12.923 01:04:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:12.923 01:04:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.923 01:04:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.181 01:04:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:13.181 "name": "raid_bdev1", 00:22:13.181 "uuid": "8480960b-6bf8-4344-9819-3455927e67b4", 00:22:13.181 "strip_size_kb": 0, 00:22:13.181 "state": "online", 00:22:13.181 "raid_level": "raid1", 00:22:13.181 "superblock": false, 00:22:13.181 "num_base_bdevs": 4, 00:22:13.181 "num_base_bdevs_discovered": 3, 00:22:13.181 "num_base_bdevs_operational": 3, 00:22:13.181 "process": { 00:22:13.181 "type": "rebuild", 00:22:13.181 "target": "spare", 00:22:13.181 "progress": { 00:22:13.181 "blocks": 51200, 00:22:13.181 "percent": 78 00:22:13.181 } 00:22:13.181 }, 00:22:13.181 "base_bdevs_list": [ 00:22:13.181 { 00:22:13.181 "name": "spare", 00:22:13.181 "uuid": "780723fa-320a-5051-a9c5-bdb7b4a21f93", 00:22:13.181 "is_configured": true, 00:22:13.181 "data_offset": 0, 00:22:13.181 "data_size": 65536 00:22:13.181 }, 00:22:13.181 { 00:22:13.181 "name": null, 00:22:13.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.181 "is_configured": false, 00:22:13.181 "data_offset": 0, 00:22:13.181 "data_size": 65536 00:22:13.181 }, 00:22:13.181 { 00:22:13.181 "name": "BaseBdev3", 00:22:13.181 "uuid": "e0bf588a-f14c-4e43-b1b8-2775d2333802", 00:22:13.181 "is_configured": true, 00:22:13.181 "data_offset": 0, 00:22:13.181 "data_size": 65536 00:22:13.181 }, 00:22:13.181 { 00:22:13.181 "name": "BaseBdev4", 00:22:13.181 "uuid": "73c1f36c-f860-4f65-9121-56586bc2219d", 00:22:13.181 "is_configured": true, 00:22:13.181 "data_offset": 0, 00:22:13.181 "data_size": 65536 00:22:13.181 } 00:22:13.181 ] 00:22:13.181 }' 00:22:13.181 01:04:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:13.181 [2024-11-18 01:04:47.472747] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:13.181 01:04:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:13.181 01:04:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:13.181 01:04:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:13.181 01:04:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:13.748 [2024-11-18 01:04:48.138049] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:14.007 [2024-11-18 01:04:48.238096] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:14.007 [2024-11-18 01:04:48.248161] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.265 01:04:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:14.265 01:04:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.265 01:04:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:14.265 01:04:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:14.265 01:04:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:14.265 01:04:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:14.265 01:04:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.265 01:04:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.524 01:04:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:14.524 "name": "raid_bdev1", 00:22:14.524 "uuid": "8480960b-6bf8-4344-9819-3455927e67b4", 00:22:14.524 "strip_size_kb": 0, 00:22:14.524 "state": "online", 00:22:14.524 "raid_level": "raid1", 00:22:14.524 "superblock": false, 00:22:14.524 "num_base_bdevs": 4, 00:22:14.524 "num_base_bdevs_discovered": 3, 00:22:14.524 "num_base_bdevs_operational": 3, 00:22:14.524 "base_bdevs_list": [ 00:22:14.524 { 00:22:14.524 "name": "spare", 00:22:14.524 "uuid": "780723fa-320a-5051-a9c5-bdb7b4a21f93", 00:22:14.524 "is_configured": true, 00:22:14.524 "data_offset": 0, 00:22:14.524 "data_size": 65536 00:22:14.524 }, 00:22:14.524 { 00:22:14.524 "name": null, 00:22:14.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.524 "is_configured": false, 00:22:14.524 "data_offset": 0, 00:22:14.524 "data_size": 65536 00:22:14.524 }, 00:22:14.524 { 00:22:14.524 "name": "BaseBdev3", 00:22:14.524 "uuid": "e0bf588a-f14c-4e43-b1b8-2775d2333802", 00:22:14.524 "is_configured": true, 00:22:14.524 "data_offset": 0, 00:22:14.524 "data_size": 65536 00:22:14.524 }, 00:22:14.524 { 00:22:14.524 "name": "BaseBdev4", 00:22:14.524 "uuid": "73c1f36c-f860-4f65-9121-56586bc2219d", 00:22:14.524 "is_configured": true, 00:22:14.524 "data_offset": 0, 00:22:14.524 "data_size": 65536 00:22:14.524 } 00:22:14.524 ] 00:22:14.524 }' 00:22:14.524 01:04:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:14.524 01:04:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:14.524 01:04:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:14.524 01:04:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:14.524 01:04:48 -- bdev/bdev_raid.sh@660 -- # break 00:22:14.524 01:04:48 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:14.524 01:04:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:14.524 01:04:48 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:14.524 01:04:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:14.524 01:04:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:14.524 01:04:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.524 01:04:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.783 01:04:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:14.783 "name": "raid_bdev1", 00:22:14.783 "uuid": "8480960b-6bf8-4344-9819-3455927e67b4", 00:22:14.783 "strip_size_kb": 0, 00:22:14.783 "state": "online", 00:22:14.783 "raid_level": "raid1", 00:22:14.783 "superblock": false, 00:22:14.783 "num_base_bdevs": 4, 00:22:14.783 
"num_base_bdevs_discovered": 3, 00:22:14.783 "num_base_bdevs_operational": 3, 00:22:14.783 "base_bdevs_list": [ 00:22:14.783 { 00:22:14.783 "name": "spare", 00:22:14.783 "uuid": "780723fa-320a-5051-a9c5-bdb7b4a21f93", 00:22:14.783 "is_configured": true, 00:22:14.783 "data_offset": 0, 00:22:14.783 "data_size": 65536 00:22:14.783 }, 00:22:14.783 { 00:22:14.783 "name": null, 00:22:14.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.783 "is_configured": false, 00:22:14.783 "data_offset": 0, 00:22:14.783 "data_size": 65536 00:22:14.783 }, 00:22:14.783 { 00:22:14.783 "name": "BaseBdev3", 00:22:14.783 "uuid": "e0bf588a-f14c-4e43-b1b8-2775d2333802", 00:22:14.783 "is_configured": true, 00:22:14.783 "data_offset": 0, 00:22:14.783 "data_size": 65536 00:22:14.783 }, 00:22:14.783 { 00:22:14.783 "name": "BaseBdev4", 00:22:14.783 "uuid": "73c1f36c-f860-4f65-9121-56586bc2219d", 00:22:14.783 "is_configured": true, 00:22:14.783 "data_offset": 0, 00:22:14.783 "data_size": 65536 00:22:14.783 } 00:22:14.783 ] 00:22:14.783 }' 00:22:14.783 01:04:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:14.783 01:04:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:14.783 01:04:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:15.040 01:04:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:15.040 01:04:49 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:15.040 01:04:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:15.040 01:04:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:15.040 01:04:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:15.040 01:04:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:15.040 01:04:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:15.040 01:04:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:15.040 01:04:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:15.040 01:04:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:15.040 01:04:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:15.040 01:04:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.040 01:04:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.298 01:04:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:15.298 "name": "raid_bdev1", 00:22:15.298 "uuid": "8480960b-6bf8-4344-9819-3455927e67b4", 00:22:15.298 "strip_size_kb": 0, 00:22:15.298 "state": "online", 00:22:15.298 "raid_level": "raid1", 00:22:15.298 "superblock": false, 00:22:15.298 "num_base_bdevs": 4, 00:22:15.298 "num_base_bdevs_discovered": 3, 00:22:15.298 "num_base_bdevs_operational": 3, 00:22:15.298 "base_bdevs_list": [ 00:22:15.298 { 00:22:15.298 "name": "spare", 00:22:15.298 "uuid": "780723fa-320a-5051-a9c5-bdb7b4a21f93", 00:22:15.298 "is_configured": true, 00:22:15.298 "data_offset": 0, 00:22:15.298 "data_size": 65536 00:22:15.298 }, 00:22:15.298 { 00:22:15.298 "name": null, 00:22:15.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.298 "is_configured": false, 00:22:15.298 "data_offset": 0, 00:22:15.298 "data_size": 65536 00:22:15.298 }, 00:22:15.298 { 00:22:15.298 "name": "BaseBdev3", 00:22:15.298 "uuid": "e0bf588a-f14c-4e43-b1b8-2775d2333802", 00:22:15.298 "is_configured": true, 00:22:15.298 "data_offset": 0, 00:22:15.298 "data_size": 65536 00:22:15.298 }, 00:22:15.298 { 00:22:15.298 "name": 
"BaseBdev4", 00:22:15.298 "uuid": "73c1f36c-f860-4f65-9121-56586bc2219d", 00:22:15.298 "is_configured": true, 00:22:15.298 "data_offset": 0, 00:22:15.298 "data_size": 65536 00:22:15.298 } 00:22:15.298 ] 00:22:15.298 }' 00:22:15.298 01:04:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:15.298 01:04:49 -- common/autotest_common.sh@10 -- # set +x 00:22:15.864 01:04:50 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:15.864 [2024-11-18 01:04:50.265508] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:15.864 [2024-11-18 01:04:50.265758] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:16.123 00:22:16.123 Latency(us) 00:22:16.123 [2024-11-18T01:04:50.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.123 [2024-11-18T01:04:50.522Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:16.123 raid_bdev1 : 10.93 98.65 295.95 0.00 0.00 14395.35 298.42 120336.58 00:22:16.123 [2024-11-18T01:04:50.522Z] =================================================================================================================== 00:22:16.123 [2024-11-18T01:04:50.522Z] Total : 98.65 295.95 0.00 0.00 14395.35 298.42 120336.58 00:22:16.123 [2024-11-18 01:04:50.334946] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.123 [2024-11-18 01:04:50.335156] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:16.123 [2024-11-18 01:04:50.335297] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:16.123 [2024-11-18 01:04:50.335385] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:22:16.123 0 00:22:16.123 01:04:50 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:16.123 01:04:50 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.382 01:04:50 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:16.382 01:04:50 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:16.382 01:04:50 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:16.382 01:04:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:16.382 01:04:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:16.382 01:04:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:16.382 01:04:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:16.382 01:04:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:16.382 01:04:50 -- bdev/nbd_common.sh@12 -- # local i 00:22:16.382 01:04:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:16.382 01:04:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:16.382 01:04:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:16.641 /dev/nbd0 00:22:16.641 01:04:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:16.641 01:04:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:16.641 01:04:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:16.641 01:04:50 -- common/autotest_common.sh@867 -- # local i 00:22:16.641 01:04:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:16.641 01:04:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 
00:22:16.641 01:04:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:16.641 01:04:50 -- common/autotest_common.sh@871 -- # break 00:22:16.641 01:04:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:16.641 01:04:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:16.641 01:04:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:16.641 1+0 records in 00:22:16.641 1+0 records out 00:22:16.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524947 s, 7.8 MB/s 00:22:16.641 01:04:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:16.641 01:04:50 -- common/autotest_common.sh@884 -- # size=4096 00:22:16.641 01:04:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:16.641 01:04:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:16.641 01:04:50 -- common/autotest_common.sh@887 -- # return 0 00:22:16.641 01:04:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:16.641 01:04:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:16.641 01:04:50 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:16.641 01:04:50 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:16.641 01:04:50 -- bdev/bdev_raid.sh@678 -- # continue 00:22:16.641 01:04:50 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:16.641 01:04:50 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:16.641 01:04:50 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:16.641 01:04:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:16.641 01:04:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:16.641 01:04:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:16.641 01:04:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:16.641 01:04:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:16.641 01:04:50 -- bdev/nbd_common.sh@12 -- # local i 00:22:16.641 01:04:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:16.641 01:04:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:16.641 01:04:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:16.899 /dev/nbd1 00:22:16.899 01:04:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:16.899 01:04:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:16.899 01:04:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:16.899 01:04:51 -- common/autotest_common.sh@867 -- # local i 00:22:16.899 01:04:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:16.899 01:04:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:16.899 01:04:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:16.899 01:04:51 -- common/autotest_common.sh@871 -- # break 00:22:16.899 01:04:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:16.899 01:04:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:16.899 01:04:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:16.899 1+0 records in 00:22:16.899 1+0 records out 00:22:16.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574803 s, 7.1 MB/s 00:22:16.899 01:04:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:16.899 
01:04:51 -- common/autotest_common.sh@884 -- # size=4096 00:22:16.899 01:04:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:16.899 01:04:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:16.899 01:04:51 -- common/autotest_common.sh@887 -- # return 0 00:22:16.899 01:04:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:16.899 01:04:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:16.899 01:04:51 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:17.158 01:04:51 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@51 -- # local i 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@41 -- # break 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@45 -- # return 0 00:22:17.158 01:04:51 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:17.158 01:04:51 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:17.158 01:04:51 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@12 -- # local i 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:17.158 01:04:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:17.427 /dev/nbd1 00:22:17.427 01:04:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:17.427 01:04:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:17.427 01:04:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:17.427 01:04:51 -- common/autotest_common.sh@867 -- # local i 00:22:17.427 01:04:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:17.427 01:04:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:17.427 01:04:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:17.427 01:04:51 -- common/autotest_common.sh@871 -- # break 00:22:17.427 01:04:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:17.427 01:04:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:17.427 01:04:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:17.427 1+0 records in 00:22:17.427 1+0 records out 00:22:17.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457378 s, 9.0 MB/s 00:22:17.427 01:04:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.427 01:04:51 -- common/autotest_common.sh@884 -- # size=4096 00:22:17.427 01:04:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.427 01:04:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:17.427 01:04:51 -- common/autotest_common.sh@887 -- # return 0 00:22:17.427 01:04:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:17.427 01:04:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:17.427 01:04:51 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:17.703 01:04:51 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:17.703 01:04:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.703 01:04:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:17.703 01:04:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:17.703 01:04:51 -- bdev/nbd_common.sh@51 -- # local i 00:22:17.703 01:04:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:17.703 01:04:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:17.961 01:04:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:17.961 01:04:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:17.961 01:04:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:17.961 01:04:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:17.961 01:04:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.961 01:04:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:17.961 01:04:52 -- bdev/nbd_common.sh@41 -- # break 00:22:17.961 01:04:52 -- bdev/nbd_common.sh@45 -- # return 0 00:22:17.961 01:04:52 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:17.961 01:04:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.961 01:04:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:17.961 01:04:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:17.961 01:04:52 -- bdev/nbd_common.sh@51 -- # local i 00:22:17.961 01:04:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:17.961 01:04:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:18.220 01:04:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:18.220 01:04:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:18.220 01:04:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:18.220 01:04:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:18.220 01:04:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:18.220 01:04:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:18.220 01:04:52 -- bdev/nbd_common.sh@41 -- # break 00:22:18.220 01:04:52 -- bdev/nbd_common.sh@45 -- # return 0 00:22:18.220 01:04:52 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:18.220 01:04:52 -- bdev/bdev_raid.sh@709 -- # killprocess 136361 00:22:18.220 01:04:52 -- common/autotest_common.sh@936 -- # '[' -z 136361 ']' 00:22:18.220 01:04:52 -- common/autotest_common.sh@940 -- # kill -0 136361 00:22:18.220 01:04:52 -- 
common/autotest_common.sh@941 -- # uname 00:22:18.220 01:04:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:18.220 01:04:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136361 00:22:18.220 01:04:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:18.220 01:04:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:18.220 01:04:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136361' 00:22:18.220 killing process with pid 136361 00:22:18.220 01:04:52 -- common/autotest_common.sh@955 -- # kill 136361 00:22:18.220 Received shutdown signal, test time was about 13.022627 seconds 00:22:18.220 00:22:18.220 Latency(us) 00:22:18.220 [2024-11-18T01:04:52.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.220 [2024-11-18T01:04:52.619Z] =================================================================================================================== 00:22:18.220 [2024-11-18T01:04:52.619Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:18.220 01:04:52 -- common/autotest_common.sh@960 -- # wait 136361 00:22:18.220 [2024-11-18 01:04:52.424771] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:18.220 [2024-11-18 01:04:52.509510] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:18.788 01:04:52 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:18.788 00:22:18.788 real 0m17.616s 00:22:18.788 user 0m26.517s 00:22:18.788 sys 0m3.239s 00:22:18.788 01:04:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:18.788 01:04:52 -- common/autotest_common.sh@10 -- # set +x 00:22:18.788 ************************************ 00:22:18.788 END TEST raid_rebuild_test_io 00:22:18.788 ************************************ 00:22:18.788 01:04:52 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:22:18.788 01:04:52 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:18.788 01:04:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:18.788 01:04:52 -- common/autotest_common.sh@10 -- # set +x 00:22:18.788 ************************************ 00:22:18.788 START TEST raid_rebuild_test_sb_io 00:22:18.788 ************************************ 00:22:18.788 01:04:52 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true true 00:22:18.788 01:04:52 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:18.788 01:04:52 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:18.788 01:04:52 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # 
(( i++ )) 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@544 -- # raid_pid=136861 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:18.788 01:04:53 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136861 /var/tmp/spdk-raid.sock 00:22:18.788 01:04:53 -- common/autotest_common.sh@829 -- # '[' -z 136861 ']' 00:22:18.788 01:04:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:18.788 01:04:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.788 01:04:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:18.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:18.789 01:04:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.789 01:04:53 -- common/autotest_common.sh@10 -- # set +x 00:22:18.789 [2024-11-18 01:04:53.070817] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:18.789 [2024-11-18 01:04:53.071249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136861 ] 00:22:18.789 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:18.789 Zero copy mechanism will not be used. 
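In outline, the test scaffolding traced above has the following shape; the command lines, socket path and RPC calls are copied from the log, while the backgrounding with & and the variable handling are simplifications, and -z appears to defer I/O until the perform_tests RPC seen later in this log.

  # bdevperf is launched as an RPC server on a private socket and then driven
  # entirely through rpc.py.
  rpc_sock=/var/tmp/spdk-raid.sock
  build/examples/bdevperf -r "$rpc_sock" -T raid_bdev1 -t 60 -w randrw -M 50 \
      -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" "$rpc_sock"      # block until the socket accepts RPCs
  # each base bdev is then built as a malloc bdev wrapped in a passthru bdev:
  scripts/rpc.py -s "$rpc_sock" bdev_malloc_create 32 512 -b BaseBdev1_malloc
  scripts/rpc.py -s "$rpc_sock" bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1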
00:22:19.047 [2024-11-18 01:04:53.214376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.047 [2024-11-18 01:04:53.294650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.047 [2024-11-18 01:04:53.373794] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:19.614 01:04:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.614 01:04:54 -- common/autotest_common.sh@862 -- # return 0 00:22:19.614 01:04:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:19.614 01:04:54 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:19.614 01:04:54 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:19.873 BaseBdev1_malloc 00:22:19.873 01:04:54 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:20.131 [2024-11-18 01:04:54.390848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:20.131 [2024-11-18 01:04:54.391150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.132 [2024-11-18 01:04:54.391244] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:22:20.132 [2024-11-18 01:04:54.391401] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.132 [2024-11-18 01:04:54.394444] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.132 [2024-11-18 01:04:54.394630] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:20.132 BaseBdev1 00:22:20.132 01:04:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:20.132 01:04:54 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:20.132 01:04:54 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:20.390 BaseBdev2_malloc 00:22:20.390 01:04:54 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:20.649 [2024-11-18 01:04:54.862698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:20.649 [2024-11-18 01:04:54.863080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.649 [2024-11-18 01:04:54.863160] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:22:20.649 [2024-11-18 01:04:54.863319] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.649 [2024-11-18 01:04:54.866100] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.649 [2024-11-18 01:04:54.866298] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:20.649 BaseBdev2 00:22:20.649 01:04:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:20.649 01:04:54 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:20.649 01:04:54 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:20.907 BaseBdev3_malloc 00:22:20.907 01:04:55 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:22:20.907 [2024-11-18 01:04:55.279867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:20.907 [2024-11-18 01:04:55.280243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.907 [2024-11-18 01:04:55.280326] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:20.907 [2024-11-18 01:04:55.280466] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.907 [2024-11-18 01:04:55.283212] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.907 [2024-11-18 01:04:55.283373] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:20.907 BaseBdev3 00:22:20.907 01:04:55 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:20.907 01:04:55 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:20.907 01:04:55 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:21.166 BaseBdev4_malloc 00:22:21.166 01:04:55 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:21.425 [2024-11-18 01:04:55.743749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:21.425 [2024-11-18 01:04:55.744109] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.425 [2024-11-18 01:04:55.744186] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:21.425 [2024-11-18 01:04:55.744308] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.425 [2024-11-18 01:04:55.747031] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.425 [2024-11-18 01:04:55.747225] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:21.425 BaseBdev4 00:22:21.425 01:04:55 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:21.684 spare_malloc 00:22:21.684 01:04:55 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:21.942 spare_delay 00:22:21.942 01:04:56 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:21.942 [2024-11-18 01:04:56.323640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:21.942 [2024-11-18 01:04:56.324005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.942 [2024-11-18 01:04:56.324080] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:21.942 [2024-11-18 01:04:56.324207] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.942 [2024-11-18 01:04:56.327025] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.942 [2024-11-18 01:04:56.327197] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:21.942 spare 00:22:21.942 01:04:56 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:22.201 [2024-11-18 01:04:56.511799] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:22.201 [2024-11-18 01:04:56.514526] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:22.201 [2024-11-18 01:04:56.514730] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:22.201 [2024-11-18 01:04:56.514813] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:22.201 [2024-11-18 01:04:56.515149] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:22:22.201 [2024-11-18 01:04:56.515236] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:22.201 [2024-11-18 01:04:56.515437] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:22:22.201 [2024-11-18 01:04:56.515944] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:22:22.201 [2024-11-18 01:04:56.516044] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:22:22.201 [2024-11-18 01:04:56.516334] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.201 01:04:56 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:22.201 01:04:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:22.201 01:04:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:22.201 01:04:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:22.201 01:04:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:22.201 01:04:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:22.201 01:04:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:22.201 01:04:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:22.201 01:04:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:22.201 01:04:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:22.201 01:04:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.201 01:04:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.460 01:04:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:22.460 "name": "raid_bdev1", 00:22:22.460 "uuid": "2ee50624-8df8-460b-a673-5d30b55c900e", 00:22:22.460 "strip_size_kb": 0, 00:22:22.460 "state": "online", 00:22:22.460 "raid_level": "raid1", 00:22:22.460 "superblock": true, 00:22:22.460 "num_base_bdevs": 4, 00:22:22.460 "num_base_bdevs_discovered": 4, 00:22:22.460 "num_base_bdevs_operational": 4, 00:22:22.460 "base_bdevs_list": [ 00:22:22.460 { 00:22:22.460 "name": "BaseBdev1", 00:22:22.460 "uuid": "d224c2ad-bcba-5a2e-a228-0a1de8191fb8", 00:22:22.460 "is_configured": true, 00:22:22.460 "data_offset": 2048, 00:22:22.460 "data_size": 63488 00:22:22.460 }, 00:22:22.460 { 00:22:22.460 "name": "BaseBdev2", 00:22:22.460 "uuid": "152783e4-fd5f-582d-b4b8-be6e22e2e3bd", 00:22:22.460 "is_configured": true, 00:22:22.460 "data_offset": 2048, 00:22:22.460 "data_size": 63488 00:22:22.460 }, 00:22:22.460 { 00:22:22.460 "name": "BaseBdev3", 00:22:22.460 "uuid": "42e407a6-c545-5896-bd9a-c8acfb85b7df", 00:22:22.460 "is_configured": true, 00:22:22.460 "data_offset": 2048, 00:22:22.460 "data_size": 63488 00:22:22.460 }, 00:22:22.460 
{ 00:22:22.460 "name": "BaseBdev4", 00:22:22.460 "uuid": "1b18de41-fd93-5793-8f8d-daacf43c9da4", 00:22:22.460 "is_configured": true, 00:22:22.460 "data_offset": 2048, 00:22:22.460 "data_size": 63488 00:22:22.460 } 00:22:22.460 ] 00:22:22.460 }' 00:22:22.460 01:04:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:22.460 01:04:56 -- common/autotest_common.sh@10 -- # set +x 00:22:23.027 01:04:57 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:23.027 01:04:57 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:23.285 [2024-11-18 01:04:57.616730] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:23.285 01:04:57 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:23.285 01:04:57 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:23.285 01:04:57 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.544 01:04:57 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:23.544 01:04:57 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:23.544 01:04:57 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:23.544 01:04:57 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:23.803 [2024-11-18 01:04:57.996296] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:22:23.803 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:23.803 Zero copy mechanism will not be used. 00:22:23.803 Running I/O for 60 seconds... 
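The repeated verify_raid_bdev_state / verify_raid_bdev_process checks in this log all follow the same pattern. A condensed sketch is below; the RPC call and the jq filters are taken verbatim from the trace, while the per-field extraction shown here is illustrative (the script stores the whole JSON blob and tests it with [[ ... ]] as seen above).

  # Dump all raid bdevs over RPC, isolate raid_bdev1, and compare fields against
  # the expected state / rebuild progress.
  info=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
         | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state'                    <<<"$info") == online  ]]
  [[ $(jq -r '.raid_level'               <<<"$info") == raid1   ]]
  [[ $(jq -r '.process.type   // "none"' <<<"$info") == rebuild ]]
  [[ $(jq -r '.process.target // "none"' <<<"$info") == spare   ]]

Once a rebuild finishes, the .process object disappears and both queries fall back to "none", which is what the wait loop's break condition keys on in the earlier part of this log.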
00:22:23.803 [2024-11-18 01:04:58.098656] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:23.803 [2024-11-18 01:04:58.104912] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:22:23.803 01:04:58 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:23.803 01:04:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:23.803 01:04:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:23.803 01:04:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:23.803 01:04:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:23.803 01:04:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:23.803 01:04:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:23.803 01:04:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:23.803 01:04:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:23.803 01:04:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:23.803 01:04:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.803 01:04:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.062 01:04:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:24.062 "name": "raid_bdev1", 00:22:24.062 "uuid": "2ee50624-8df8-460b-a673-5d30b55c900e", 00:22:24.062 "strip_size_kb": 0, 00:22:24.062 "state": "online", 00:22:24.062 "raid_level": "raid1", 00:22:24.062 "superblock": true, 00:22:24.062 "num_base_bdevs": 4, 00:22:24.062 "num_base_bdevs_discovered": 3, 00:22:24.062 "num_base_bdevs_operational": 3, 00:22:24.062 "base_bdevs_list": [ 00:22:24.063 { 00:22:24.063 "name": null, 00:22:24.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.063 "is_configured": false, 00:22:24.063 "data_offset": 2048, 00:22:24.063 "data_size": 63488 00:22:24.063 }, 00:22:24.063 { 00:22:24.063 "name": "BaseBdev2", 00:22:24.063 "uuid": "152783e4-fd5f-582d-b4b8-be6e22e2e3bd", 00:22:24.063 "is_configured": true, 00:22:24.063 "data_offset": 2048, 00:22:24.063 "data_size": 63488 00:22:24.063 }, 00:22:24.063 { 00:22:24.063 "name": "BaseBdev3", 00:22:24.063 "uuid": "42e407a6-c545-5896-bd9a-c8acfb85b7df", 00:22:24.063 "is_configured": true, 00:22:24.063 "data_offset": 2048, 00:22:24.063 "data_size": 63488 00:22:24.063 }, 00:22:24.063 { 00:22:24.063 "name": "BaseBdev4", 00:22:24.063 "uuid": "1b18de41-fd93-5793-8f8d-daacf43c9da4", 00:22:24.063 "is_configured": true, 00:22:24.063 "data_offset": 2048, 00:22:24.063 "data_size": 63488 00:22:24.063 } 00:22:24.063 ] 00:22:24.063 }' 00:22:24.063 01:04:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:24.063 01:04:58 -- common/autotest_common.sh@10 -- # set +x 00:22:24.631 01:04:58 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:24.901 [2024-11-18 01:04:59.173457] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:24.901 [2024-11-18 01:04:59.173763] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:24.901 01:04:59 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:24.901 [2024-11-18 01:04:59.218195] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:22:24.901 [2024-11-18 01:04:59.220992] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:25.167 
[2024-11-18 01:04:59.340492] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:25.167 [2024-11-18 01:04:59.341378] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:25.167 [2024-11-18 01:04:59.472952] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:25.167 [2024-11-18 01:04:59.474123] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:25.426 [2024-11-18 01:04:59.815485] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:25.426 [2024-11-18 01:04:59.816424] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:25.684 [2024-11-18 01:05:00.037056] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:25.684 [2024-11-18 01:05:00.038272] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:25.942 01:05:00 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.942 01:05:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:25.942 01:05:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:25.942 01:05:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:25.942 01:05:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:25.942 01:05:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.942 01:05:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.200 [2024-11-18 01:05:00.364584] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:26.201 01:05:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:26.201 "name": "raid_bdev1", 00:22:26.201 "uuid": "2ee50624-8df8-460b-a673-5d30b55c900e", 00:22:26.201 "strip_size_kb": 0, 00:22:26.201 "state": "online", 00:22:26.201 "raid_level": "raid1", 00:22:26.201 "superblock": true, 00:22:26.201 "num_base_bdevs": 4, 00:22:26.201 "num_base_bdevs_discovered": 4, 00:22:26.201 "num_base_bdevs_operational": 4, 00:22:26.201 "process": { 00:22:26.201 "type": "rebuild", 00:22:26.201 "target": "spare", 00:22:26.201 "progress": { 00:22:26.201 "blocks": 14336, 00:22:26.201 "percent": 22 00:22:26.201 } 00:22:26.201 }, 00:22:26.201 "base_bdevs_list": [ 00:22:26.201 { 00:22:26.201 "name": "spare", 00:22:26.201 "uuid": "a75f9b8d-87b2-57d0-ab06-3309e7f8fdcd", 00:22:26.201 "is_configured": true, 00:22:26.201 "data_offset": 2048, 00:22:26.201 "data_size": 63488 00:22:26.201 }, 00:22:26.201 { 00:22:26.201 "name": "BaseBdev2", 00:22:26.201 "uuid": "152783e4-fd5f-582d-b4b8-be6e22e2e3bd", 00:22:26.201 "is_configured": true, 00:22:26.201 "data_offset": 2048, 00:22:26.201 "data_size": 63488 00:22:26.201 }, 00:22:26.201 { 00:22:26.201 "name": "BaseBdev3", 00:22:26.201 "uuid": "42e407a6-c545-5896-bd9a-c8acfb85b7df", 00:22:26.201 "is_configured": true, 00:22:26.201 "data_offset": 2048, 00:22:26.201 "data_size": 63488 00:22:26.201 }, 00:22:26.201 { 00:22:26.201 "name": "BaseBdev4", 00:22:26.201 "uuid": "1b18de41-fd93-5793-8f8d-daacf43c9da4", 00:22:26.201 
"is_configured": true, 00:22:26.201 "data_offset": 2048, 00:22:26.201 "data_size": 63488 00:22:26.201 } 00:22:26.201 ] 00:22:26.201 }' 00:22:26.201 01:05:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:26.201 01:05:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.201 01:05:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:26.201 01:05:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.201 01:05:00 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:26.201 [2024-11-18 01:05:00.570278] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:26.459 [2024-11-18 01:05:00.802002] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:26.717 [2024-11-18 01:05:00.898736] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:26.717 [2024-11-18 01:05:01.008054] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:26.717 [2024-11-18 01:05:01.013312] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.718 [2024-11-18 01:05:01.028892] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:22:26.718 01:05:01 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:26.718 01:05:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:26.718 01:05:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:26.718 01:05:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:26.718 01:05:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:26.718 01:05:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:26.718 01:05:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:26.718 01:05:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:26.718 01:05:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:26.718 01:05:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:26.718 01:05:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.718 01:05:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.976 01:05:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:26.976 "name": "raid_bdev1", 00:22:26.976 "uuid": "2ee50624-8df8-460b-a673-5d30b55c900e", 00:22:26.976 "strip_size_kb": 0, 00:22:26.976 "state": "online", 00:22:26.976 "raid_level": "raid1", 00:22:26.976 "superblock": true, 00:22:26.976 "num_base_bdevs": 4, 00:22:26.976 "num_base_bdevs_discovered": 3, 00:22:26.976 "num_base_bdevs_operational": 3, 00:22:26.976 "base_bdevs_list": [ 00:22:26.976 { 00:22:26.976 "name": null, 00:22:26.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.976 "is_configured": false, 00:22:26.976 "data_offset": 2048, 00:22:26.976 "data_size": 63488 00:22:26.976 }, 00:22:26.976 { 00:22:26.976 "name": "BaseBdev2", 00:22:26.976 "uuid": "152783e4-fd5f-582d-b4b8-be6e22e2e3bd", 00:22:26.976 "is_configured": true, 00:22:26.976 "data_offset": 2048, 00:22:26.976 "data_size": 63488 00:22:26.976 }, 00:22:26.976 { 00:22:26.976 "name": "BaseBdev3", 00:22:26.976 "uuid": "42e407a6-c545-5896-bd9a-c8acfb85b7df", 00:22:26.976 "is_configured": true, 
00:22:26.976 "data_offset": 2048, 00:22:26.976 "data_size": 63488 00:22:26.976 }, 00:22:26.976 { 00:22:26.976 "name": "BaseBdev4", 00:22:26.976 "uuid": "1b18de41-fd93-5793-8f8d-daacf43c9da4", 00:22:26.976 "is_configured": true, 00:22:26.976 "data_offset": 2048, 00:22:26.976 "data_size": 63488 00:22:26.976 } 00:22:26.976 ] 00:22:26.976 }' 00:22:26.976 01:05:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:26.976 01:05:01 -- common/autotest_common.sh@10 -- # set +x 00:22:27.544 01:05:01 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:27.544 01:05:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:27.544 01:05:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:27.544 01:05:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:27.544 01:05:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:27.544 01:05:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.544 01:05:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.802 01:05:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:27.802 "name": "raid_bdev1", 00:22:27.802 "uuid": "2ee50624-8df8-460b-a673-5d30b55c900e", 00:22:27.802 "strip_size_kb": 0, 00:22:27.802 "state": "online", 00:22:27.802 "raid_level": "raid1", 00:22:27.802 "superblock": true, 00:22:27.802 "num_base_bdevs": 4, 00:22:27.802 "num_base_bdevs_discovered": 3, 00:22:27.802 "num_base_bdevs_operational": 3, 00:22:27.802 "base_bdevs_list": [ 00:22:27.802 { 00:22:27.802 "name": null, 00:22:27.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.802 "is_configured": false, 00:22:27.802 "data_offset": 2048, 00:22:27.802 "data_size": 63488 00:22:27.802 }, 00:22:27.802 { 00:22:27.802 "name": "BaseBdev2", 00:22:27.802 "uuid": "152783e4-fd5f-582d-b4b8-be6e22e2e3bd", 00:22:27.802 "is_configured": true, 00:22:27.802 "data_offset": 2048, 00:22:27.802 "data_size": 63488 00:22:27.802 }, 00:22:27.802 { 00:22:27.802 "name": "BaseBdev3", 00:22:27.802 "uuid": "42e407a6-c545-5896-bd9a-c8acfb85b7df", 00:22:27.802 "is_configured": true, 00:22:27.802 "data_offset": 2048, 00:22:27.802 "data_size": 63488 00:22:27.802 }, 00:22:27.802 { 00:22:27.802 "name": "BaseBdev4", 00:22:27.802 "uuid": "1b18de41-fd93-5793-8f8d-daacf43c9da4", 00:22:27.802 "is_configured": true, 00:22:27.802 "data_offset": 2048, 00:22:27.802 "data_size": 63488 00:22:27.802 } 00:22:27.802 ] 00:22:27.802 }' 00:22:27.802 01:05:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:27.802 01:05:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:27.802 01:05:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:28.061 01:05:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:28.061 01:05:02 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:28.061 [2024-11-18 01:05:02.452741] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:28.061 [2024-11-18 01:05:02.453094] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:28.319 [2024-11-18 01:05:02.488704] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:22:28.319 [2024-11-18 01:05:02.491471] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:28.319 01:05:02 -- bdev/bdev_raid.sh@614 -- 
# sleep 1 00:22:28.319 [2024-11-18 01:05:02.610031] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:28.319 [2024-11-18 01:05:02.610925] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:28.578 [2024-11-18 01:05:02.816523] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:28.578 [2024-11-18 01:05:02.817052] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:28.836 [2024-11-18 01:05:03.059346] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:28.836 [2024-11-18 01:05:03.186104] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:29.403 01:05:03 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.403 01:05:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:29.403 01:05:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:29.403 01:05:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:29.403 01:05:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:29.403 01:05:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.403 01:05:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.403 [2024-11-18 01:05:03.536427] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:29.403 01:05:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:29.403 "name": "raid_bdev1", 00:22:29.403 "uuid": "2ee50624-8df8-460b-a673-5d30b55c900e", 00:22:29.403 "strip_size_kb": 0, 00:22:29.403 "state": "online", 00:22:29.403 "raid_level": "raid1", 00:22:29.403 "superblock": true, 00:22:29.403 "num_base_bdevs": 4, 00:22:29.403 "num_base_bdevs_discovered": 4, 00:22:29.403 "num_base_bdevs_operational": 4, 00:22:29.403 "process": { 00:22:29.403 "type": "rebuild", 00:22:29.403 "target": "spare", 00:22:29.403 "progress": { 00:22:29.403 "blocks": 18432, 00:22:29.403 "percent": 29 00:22:29.403 } 00:22:29.403 }, 00:22:29.403 "base_bdevs_list": [ 00:22:29.403 { 00:22:29.403 "name": "spare", 00:22:29.403 "uuid": "a75f9b8d-87b2-57d0-ab06-3309e7f8fdcd", 00:22:29.403 "is_configured": true, 00:22:29.403 "data_offset": 2048, 00:22:29.403 "data_size": 63488 00:22:29.403 }, 00:22:29.403 { 00:22:29.403 "name": "BaseBdev2", 00:22:29.403 "uuid": "152783e4-fd5f-582d-b4b8-be6e22e2e3bd", 00:22:29.403 "is_configured": true, 00:22:29.403 "data_offset": 2048, 00:22:29.403 "data_size": 63488 00:22:29.403 }, 00:22:29.403 { 00:22:29.403 "name": "BaseBdev3", 00:22:29.403 "uuid": "42e407a6-c545-5896-bd9a-c8acfb85b7df", 00:22:29.403 "is_configured": true, 00:22:29.403 "data_offset": 2048, 00:22:29.403 "data_size": 63488 00:22:29.403 }, 00:22:29.403 { 00:22:29.403 "name": "BaseBdev4", 00:22:29.403 "uuid": "1b18de41-fd93-5793-8f8d-daacf43c9da4", 00:22:29.403 "is_configured": true, 00:22:29.403 "data_offset": 2048, 00:22:29.403 "data_size": 63488 00:22:29.403 } 00:22:29.403 ] 00:22:29.403 }' 00:22:29.403 01:05:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:29.403 [2024-11-18 01:05:03.790531] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:29.403 01:05:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:29.661 01:05:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:29.662 01:05:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:29.662 01:05:03 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:29.662 01:05:03 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:29.662 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:29.662 01:05:03 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:29.662 01:05:03 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:29.662 01:05:03 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:29.662 01:05:03 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:29.662 [2024-11-18 01:05:04.032496] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:29.920 [2024-11-18 01:05:04.233046] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000026d0 00:22:29.920 [2024-11-18 01:05:04.233382] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002940 00:22:30.177 [2024-11-18 01:05:04.347045] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:30.178 [2024-11-18 01:05:04.347673] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:30.178 01:05:04 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:30.178 01:05:04 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:30.178 01:05:04 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.178 01:05:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:30.178 01:05:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:30.178 01:05:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:30.178 01:05:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:30.178 01:05:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.178 01:05:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.436 01:05:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:30.436 "name": "raid_bdev1", 00:22:30.436 "uuid": "2ee50624-8df8-460b-a673-5d30b55c900e", 00:22:30.436 "strip_size_kb": 0, 00:22:30.436 "state": "online", 00:22:30.436 "raid_level": "raid1", 00:22:30.436 "superblock": true, 00:22:30.436 "num_base_bdevs": 4, 00:22:30.436 "num_base_bdevs_discovered": 3, 00:22:30.436 "num_base_bdevs_operational": 3, 00:22:30.436 "process": { 00:22:30.436 "type": "rebuild", 00:22:30.436 "target": "spare", 00:22:30.436 "progress": { 00:22:30.436 "blocks": 32768, 00:22:30.436 "percent": 51 00:22:30.436 } 00:22:30.436 }, 00:22:30.436 "base_bdevs_list": [ 00:22:30.436 { 00:22:30.436 "name": "spare", 00:22:30.436 "uuid": "a75f9b8d-87b2-57d0-ab06-3309e7f8fdcd", 00:22:30.436 "is_configured": true, 00:22:30.436 "data_offset": 2048, 00:22:30.436 "data_size": 63488 00:22:30.436 }, 00:22:30.436 { 00:22:30.436 "name": null, 00:22:30.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.436 "is_configured": false, 00:22:30.436 
"data_offset": 2048, 00:22:30.436 "data_size": 63488 00:22:30.436 }, 00:22:30.436 { 00:22:30.436 "name": "BaseBdev3", 00:22:30.436 "uuid": "42e407a6-c545-5896-bd9a-c8acfb85b7df", 00:22:30.436 "is_configured": true, 00:22:30.436 "data_offset": 2048, 00:22:30.436 "data_size": 63488 00:22:30.436 }, 00:22:30.436 { 00:22:30.436 "name": "BaseBdev4", 00:22:30.436 "uuid": "1b18de41-fd93-5793-8f8d-daacf43c9da4", 00:22:30.436 "is_configured": true, 00:22:30.436 "data_offset": 2048, 00:22:30.436 "data_size": 63488 00:22:30.436 } 00:22:30.436 ] 00:22:30.436 }' 00:22:30.436 01:05:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:30.436 01:05:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:30.436 01:05:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.436 01:05:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.436 01:05:04 -- bdev/bdev_raid.sh@657 -- # local timeout=507 00:22:30.436 01:05:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:30.436 01:05:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.436 01:05:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:30.437 01:05:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:30.437 01:05:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:30.437 01:05:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:30.437 01:05:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.437 01:05:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.695 [2024-11-18 01:05:04.918720] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:30.695 01:05:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:30.695 "name": "raid_bdev1", 00:22:30.695 "uuid": "2ee50624-8df8-460b-a673-5d30b55c900e", 00:22:30.695 "strip_size_kb": 0, 00:22:30.695 "state": "online", 00:22:30.695 "raid_level": "raid1", 00:22:30.695 "superblock": true, 00:22:30.695 "num_base_bdevs": 4, 00:22:30.695 "num_base_bdevs_discovered": 3, 00:22:30.695 "num_base_bdevs_operational": 3, 00:22:30.695 "process": { 00:22:30.695 "type": "rebuild", 00:22:30.695 "target": "spare", 00:22:30.695 "progress": { 00:22:30.695 "blocks": 38912, 00:22:30.695 "percent": 61 00:22:30.695 } 00:22:30.695 }, 00:22:30.695 "base_bdevs_list": [ 00:22:30.695 { 00:22:30.695 "name": "spare", 00:22:30.695 "uuid": "a75f9b8d-87b2-57d0-ab06-3309e7f8fdcd", 00:22:30.695 "is_configured": true, 00:22:30.695 "data_offset": 2048, 00:22:30.695 "data_size": 63488 00:22:30.695 }, 00:22:30.695 { 00:22:30.695 "name": null, 00:22:30.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.695 "is_configured": false, 00:22:30.695 "data_offset": 2048, 00:22:30.695 "data_size": 63488 00:22:30.695 }, 00:22:30.695 { 00:22:30.695 "name": "BaseBdev3", 00:22:30.695 "uuid": "42e407a6-c545-5896-bd9a-c8acfb85b7df", 00:22:30.695 "is_configured": true, 00:22:30.695 "data_offset": 2048, 00:22:30.695 "data_size": 63488 00:22:30.695 }, 00:22:30.695 { 00:22:30.695 "name": "BaseBdev4", 00:22:30.695 "uuid": "1b18de41-fd93-5793-8f8d-daacf43c9da4", 00:22:30.695 "is_configured": true, 00:22:30.695 "data_offset": 2048, 00:22:30.695 "data_size": 63488 00:22:30.695 } 00:22:30.695 ] 00:22:30.695 }' 00:22:30.695 01:05:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:30.695 01:05:05 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:30.695 01:05:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.695 01:05:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.695 01:05:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:31.272 [2024-11-18 01:05:05.407732] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:22:31.272 [2024-11-18 01:05:05.623995] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:31.842 01:05:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:31.842 01:05:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.842 01:05:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:31.842 01:05:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:31.842 01:05:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:31.842 01:05:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:31.842 01:05:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.842 01:05:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.102 01:05:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:32.102 "name": "raid_bdev1", 00:22:32.102 "uuid": "2ee50624-8df8-460b-a673-5d30b55c900e", 00:22:32.102 "strip_size_kb": 0, 00:22:32.102 "state": "online", 00:22:32.102 "raid_level": "raid1", 00:22:32.102 "superblock": true, 00:22:32.102 "num_base_bdevs": 4, 00:22:32.102 "num_base_bdevs_discovered": 3, 00:22:32.102 "num_base_bdevs_operational": 3, 00:22:32.102 "process": { 00:22:32.102 "type": "rebuild", 00:22:32.102 "target": "spare", 00:22:32.102 "progress": { 00:22:32.102 "blocks": 57344, 00:22:32.102 "percent": 90 00:22:32.102 } 00:22:32.102 }, 00:22:32.102 "base_bdevs_list": [ 00:22:32.102 { 00:22:32.102 "name": "spare", 00:22:32.102 "uuid": "a75f9b8d-87b2-57d0-ab06-3309e7f8fdcd", 00:22:32.102 "is_configured": true, 00:22:32.102 "data_offset": 2048, 00:22:32.102 "data_size": 63488 00:22:32.102 }, 00:22:32.102 { 00:22:32.102 "name": null, 00:22:32.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.102 "is_configured": false, 00:22:32.102 "data_offset": 2048, 00:22:32.102 "data_size": 63488 00:22:32.102 }, 00:22:32.102 { 00:22:32.102 "name": "BaseBdev3", 00:22:32.102 "uuid": "42e407a6-c545-5896-bd9a-c8acfb85b7df", 00:22:32.102 "is_configured": true, 00:22:32.102 "data_offset": 2048, 00:22:32.102 "data_size": 63488 00:22:32.102 }, 00:22:32.102 { 00:22:32.102 "name": "BaseBdev4", 00:22:32.102 "uuid": "1b18de41-fd93-5793-8f8d-daacf43c9da4", 00:22:32.102 "is_configured": true, 00:22:32.102 "data_offset": 2048, 00:22:32.102 "data_size": 63488 00:22:32.102 } 00:22:32.102 ] 00:22:32.102 }' 00:22:32.102 01:05:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:32.102 01:05:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:32.102 01:05:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:32.102 01:05:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:32.102 01:05:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:32.102 [2024-11-18 01:05:06.496787] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:32.361 [2024-11-18 01:05:06.599831] bdev_raid.c:2285:raid_bdev_process_finish_done: 
*NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:32.361 [2024-11-18 01:05:06.605079] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.296 01:05:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:33.296 01:05:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:33.296 01:05:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:33.296 01:05:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:33.296 01:05:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:33.296 01:05:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:33.296 01:05:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.296 01:05:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.296 01:05:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:33.296 "name": "raid_bdev1", 00:22:33.296 "uuid": "2ee50624-8df8-460b-a673-5d30b55c900e", 00:22:33.296 "strip_size_kb": 0, 00:22:33.296 "state": "online", 00:22:33.296 "raid_level": "raid1", 00:22:33.296 "superblock": true, 00:22:33.296 "num_base_bdevs": 4, 00:22:33.296 "num_base_bdevs_discovered": 3, 00:22:33.296 "num_base_bdevs_operational": 3, 00:22:33.296 "base_bdevs_list": [ 00:22:33.296 { 00:22:33.296 "name": "spare", 00:22:33.296 "uuid": "a75f9b8d-87b2-57d0-ab06-3309e7f8fdcd", 00:22:33.296 "is_configured": true, 00:22:33.296 "data_offset": 2048, 00:22:33.296 "data_size": 63488 00:22:33.296 }, 00:22:33.296 { 00:22:33.296 "name": null, 00:22:33.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.296 "is_configured": false, 00:22:33.296 "data_offset": 2048, 00:22:33.296 "data_size": 63488 00:22:33.296 }, 00:22:33.296 { 00:22:33.296 "name": "BaseBdev3", 00:22:33.296 "uuid": "42e407a6-c545-5896-bd9a-c8acfb85b7df", 00:22:33.296 "is_configured": true, 00:22:33.296 "data_offset": 2048, 00:22:33.297 "data_size": 63488 00:22:33.297 }, 00:22:33.297 { 00:22:33.297 "name": "BaseBdev4", 00:22:33.297 "uuid": "1b18de41-fd93-5793-8f8d-daacf43c9da4", 00:22:33.297 "is_configured": true, 00:22:33.297 "data_offset": 2048, 00:22:33.297 "data_size": 63488 00:22:33.297 } 00:22:33.297 ] 00:22:33.297 }' 00:22:33.297 01:05:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:33.297 01:05:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:33.297 01:05:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:33.558 01:05:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:33.558 01:05:07 -- bdev/bdev_raid.sh@660 -- # break 00:22:33.558 01:05:07 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:33.558 01:05:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:33.558 01:05:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:33.558 01:05:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:33.558 01:05:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:33.558 01:05:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.558 01:05:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.835 01:05:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:33.835 "name": "raid_bdev1", 00:22:33.835 "uuid": "2ee50624-8df8-460b-a673-5d30b55c900e", 00:22:33.835 "strip_size_kb": 0, 00:22:33.835 "state": "online", 00:22:33.835 
"raid_level": "raid1", 00:22:33.835 "superblock": true, 00:22:33.835 "num_base_bdevs": 4, 00:22:33.835 "num_base_bdevs_discovered": 3, 00:22:33.835 "num_base_bdevs_operational": 3, 00:22:33.835 "base_bdevs_list": [ 00:22:33.835 { 00:22:33.835 "name": "spare", 00:22:33.835 "uuid": "a75f9b8d-87b2-57d0-ab06-3309e7f8fdcd", 00:22:33.835 "is_configured": true, 00:22:33.835 "data_offset": 2048, 00:22:33.835 "data_size": 63488 00:22:33.835 }, 00:22:33.835 { 00:22:33.835 "name": null, 00:22:33.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.835 "is_configured": false, 00:22:33.835 "data_offset": 2048, 00:22:33.835 "data_size": 63488 00:22:33.835 }, 00:22:33.835 { 00:22:33.835 "name": "BaseBdev3", 00:22:33.835 "uuid": "42e407a6-c545-5896-bd9a-c8acfb85b7df", 00:22:33.835 "is_configured": true, 00:22:33.835 "data_offset": 2048, 00:22:33.835 "data_size": 63488 00:22:33.835 }, 00:22:33.835 { 00:22:33.835 "name": "BaseBdev4", 00:22:33.835 "uuid": "1b18de41-fd93-5793-8f8d-daacf43c9da4", 00:22:33.835 "is_configured": true, 00:22:33.835 "data_offset": 2048, 00:22:33.835 "data_size": 63488 00:22:33.835 } 00:22:33.835 ] 00:22:33.835 }' 00:22:33.835 01:05:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.835 01:05:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.103 01:05:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:34.103 "name": "raid_bdev1", 00:22:34.103 "uuid": "2ee50624-8df8-460b-a673-5d30b55c900e", 00:22:34.103 "strip_size_kb": 0, 00:22:34.103 "state": "online", 00:22:34.103 "raid_level": "raid1", 00:22:34.103 "superblock": true, 00:22:34.103 "num_base_bdevs": 4, 00:22:34.103 "num_base_bdevs_discovered": 3, 00:22:34.103 "num_base_bdevs_operational": 3, 00:22:34.103 "base_bdevs_list": [ 00:22:34.103 { 00:22:34.103 "name": "spare", 00:22:34.103 "uuid": "a75f9b8d-87b2-57d0-ab06-3309e7f8fdcd", 00:22:34.103 "is_configured": true, 00:22:34.103 "data_offset": 2048, 00:22:34.103 "data_size": 63488 00:22:34.103 }, 00:22:34.103 { 00:22:34.103 "name": null, 00:22:34.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.103 "is_configured": false, 00:22:34.103 "data_offset": 2048, 00:22:34.103 "data_size": 63488 00:22:34.103 }, 00:22:34.104 { 00:22:34.104 "name": "BaseBdev3", 00:22:34.104 "uuid": "42e407a6-c545-5896-bd9a-c8acfb85b7df", 00:22:34.104 "is_configured": true, 
00:22:34.104 "data_offset": 2048, 00:22:34.104 "data_size": 63488 00:22:34.104 }, 00:22:34.104 { 00:22:34.104 "name": "BaseBdev4", 00:22:34.104 "uuid": "1b18de41-fd93-5793-8f8d-daacf43c9da4", 00:22:34.104 "is_configured": true, 00:22:34.104 "data_offset": 2048, 00:22:34.104 "data_size": 63488 00:22:34.104 } 00:22:34.104 ] 00:22:34.104 }' 00:22:34.104 01:05:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:34.104 01:05:08 -- common/autotest_common.sh@10 -- # set +x 00:22:34.669 01:05:08 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:34.928 [2024-11-18 01:05:09.234393] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:34.928 [2024-11-18 01:05:09.234729] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:35.185 00:22:35.186 Latency(us) 00:22:35.186 [2024-11-18T01:05:09.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.186 [2024-11-18T01:05:09.585Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:35.186 raid_bdev1 : 11.33 95.11 285.32 0.00 0.00 15583.53 300.37 111848.11 00:22:35.186 [2024-11-18T01:05:09.585Z] =================================================================================================================== 00:22:35.186 [2024-11-18T01:05:09.585Z] Total : 95.11 285.32 0.00 0.00 15583.53 300.37 111848.11 00:22:35.186 [2024-11-18 01:05:09.339609] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.186 [2024-11-18 01:05:09.339810] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:35.186 [2024-11-18 01:05:09.339967] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:35.186 [2024-11-18 01:05:09.340062] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:22:35.186 0 00:22:35.186 01:05:09 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:35.186 01:05:09 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.444 01:05:09 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:35.444 01:05:09 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:35.444 01:05:09 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:35.444 01:05:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:35.444 01:05:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:35.444 01:05:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:35.444 01:05:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:35.444 01:05:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:35.444 01:05:09 -- bdev/nbd_common.sh@12 -- # local i 00:22:35.444 01:05:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:35.444 01:05:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:35.444 01:05:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:35.702 /dev/nbd0 00:22:35.703 01:05:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:35.703 01:05:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:35.703 01:05:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:35.703 01:05:09 -- common/autotest_common.sh@867 -- # local i 00:22:35.703 01:05:09 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:35.703 01:05:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:35.703 01:05:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:35.703 01:05:09 -- common/autotest_common.sh@871 -- # break 00:22:35.703 01:05:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:35.703 01:05:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:35.703 01:05:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:35.703 1+0 records in 00:22:35.703 1+0 records out 00:22:35.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778854 s, 5.3 MB/s 00:22:35.703 01:05:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.703 01:05:09 -- common/autotest_common.sh@884 -- # size=4096 00:22:35.703 01:05:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.703 01:05:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:35.703 01:05:09 -- common/autotest_common.sh@887 -- # return 0 00:22:35.703 01:05:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:35.703 01:05:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:35.703 01:05:09 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:35.703 01:05:09 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:35.703 01:05:09 -- bdev/bdev_raid.sh@678 -- # continue 00:22:35.703 01:05:09 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:35.703 01:05:09 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:35.703 01:05:09 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:35.703 01:05:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:35.703 01:05:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:35.703 01:05:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:35.703 01:05:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:35.703 01:05:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:35.703 01:05:09 -- bdev/nbd_common.sh@12 -- # local i 00:22:35.703 01:05:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:35.703 01:05:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:35.703 01:05:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:35.961 /dev/nbd1 00:22:35.961 01:05:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:35.961 01:05:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:35.961 01:05:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:35.961 01:05:10 -- common/autotest_common.sh@867 -- # local i 00:22:35.961 01:05:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:35.961 01:05:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:35.961 01:05:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:35.961 01:05:10 -- common/autotest_common.sh@871 -- # break 00:22:35.961 01:05:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:35.961 01:05:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:35.961 01:05:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:35.961 1+0 records in 00:22:35.961 1+0 records out 00:22:35.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535263 s, 7.7 MB/s 00:22:35.961 
01:05:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.961 01:05:10 -- common/autotest_common.sh@884 -- # size=4096 00:22:35.961 01:05:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.961 01:05:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:35.961 01:05:10 -- common/autotest_common.sh@887 -- # return 0 00:22:35.961 01:05:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:35.961 01:05:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:35.961 01:05:10 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:35.961 01:05:10 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:35.961 01:05:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:35.961 01:05:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:35.961 01:05:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:35.961 01:05:10 -- bdev/nbd_common.sh@51 -- # local i 00:22:35.961 01:05:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:35.961 01:05:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:36.220 01:05:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:36.478 01:05:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:36.478 01:05:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:36.478 01:05:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:36.478 01:05:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.478 01:05:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:36.478 01:05:10 -- bdev/nbd_common.sh@41 -- # break 00:22:36.478 01:05:10 -- bdev/nbd_common.sh@45 -- # return 0 00:22:36.478 01:05:10 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:36.478 01:05:10 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:36.478 01:05:10 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:36.478 01:05:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.478 01:05:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:36.478 01:05:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:36.478 01:05:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:36.479 01:05:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:36.479 01:05:10 -- bdev/nbd_common.sh@12 -- # local i 00:22:36.479 01:05:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:36.479 01:05:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:36.479 01:05:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:36.737 /dev/nbd1 00:22:36.737 01:05:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:36.737 01:05:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:36.737 01:05:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:36.737 01:05:10 -- common/autotest_common.sh@867 -- # local i 00:22:36.737 01:05:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:36.737 01:05:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:36.737 01:05:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:36.737 01:05:10 -- common/autotest_common.sh@871 -- # break 00:22:36.737 01:05:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:36.737 01:05:10 -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:36.737 01:05:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:36.737 1+0 records in 00:22:36.737 1+0 records out 00:22:36.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414128 s, 9.9 MB/s 00:22:36.737 01:05:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.737 01:05:10 -- common/autotest_common.sh@884 -- # size=4096 00:22:36.737 01:05:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.737 01:05:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:36.737 01:05:10 -- common/autotest_common.sh@887 -- # return 0 00:22:36.737 01:05:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:36.737 01:05:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:36.737 01:05:10 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:36.737 01:05:11 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:36.737 01:05:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.737 01:05:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:36.737 01:05:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:36.737 01:05:11 -- bdev/nbd_common.sh@51 -- # local i 00:22:36.737 01:05:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:36.737 01:05:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:36.995 01:05:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:36.995 01:05:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:36.995 01:05:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:36.995 01:05:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:36.995 01:05:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.995 01:05:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:36.995 01:05:11 -- bdev/nbd_common.sh@41 -- # break 00:22:36.995 01:05:11 -- bdev/nbd_common.sh@45 -- # return 0 00:22:36.995 01:05:11 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:36.995 01:05:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.995 01:05:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:36.995 01:05:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:36.995 01:05:11 -- bdev/nbd_common.sh@51 -- # local i 00:22:36.995 01:05:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:36.995 01:05:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:37.254 01:05:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:37.254 01:05:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:37.254 01:05:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:37.254 01:05:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:37.254 01:05:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:37.254 01:05:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:37.254 01:05:11 -- bdev/nbd_common.sh@41 -- # break 00:22:37.254 01:05:11 -- bdev/nbd_common.sh@45 -- # return 0 00:22:37.254 01:05:11 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:37.254 01:05:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:37.254 01:05:11 -- bdev/bdev_raid.sh@695 -- # '[' -z 
BaseBdev1 ']' 00:22:37.254 01:05:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:37.512 01:05:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:37.771 [2024-11-18 01:05:12.116207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:37.771 [2024-11-18 01:05:12.116328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.771 [2024-11-18 01:05:12.116374] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:37.771 [2024-11-18 01:05:12.116398] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.771 [2024-11-18 01:05:12.119152] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.771 [2024-11-18 01:05:12.119225] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:37.771 [2024-11-18 01:05:12.119327] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:37.771 [2024-11-18 01:05:12.119408] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.771 BaseBdev1 00:22:37.771 01:05:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:37.771 01:05:12 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:37.771 01:05:12 -- bdev/bdev_raid.sh@696 -- # continue 00:22:37.771 01:05:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:37.771 01:05:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:37.771 01:05:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:38.029 01:05:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:38.288 [2024-11-18 01:05:12.516308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:38.288 [2024-11-18 01:05:12.516407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.288 [2024-11-18 01:05:12.516454] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:38.288 [2024-11-18 01:05:12.516480] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.288 [2024-11-18 01:05:12.516958] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.288 [2024-11-18 01:05:12.517020] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:38.288 [2024-11-18 01:05:12.517107] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:38.288 [2024-11-18 01:05:12.517120] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:38.288 [2024-11-18 01:05:12.517128] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:38.288 [2024-11-18 01:05:12.517161] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state configuring 00:22:38.288 [2024-11-18 01:05:12.517219] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:38.288 BaseBdev3 
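The loop being traced here tears down each surviving base bdev's passthru and re-creates it on top of its *_malloc backing; when the passthru reappears, the raid module's examine path finds the on-disk superblock and claims the bdev back into raid_bdev1 (the raid_bdev_examine_load_sb_cb and "is claimed" messages above and below). A simplified sketch of that loop, with names and socket taken from the log; the empty slot stands for the previously removed BaseBdev2:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for bdev in BaseBdev1 "" BaseBdev3 BaseBdev4; do
    [ -z "$bdev" ] && continue                          # slot emptied by the earlier remove_base_bdev
    $rpc bdev_passthru_delete "$bdev"
    $rpc bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
    # on create, bdev_raid examines the new bdev, reads the raid1 superblock
    # and re-claims it as a base bdev of raid_bdev1
done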
00:22:38.288 01:05:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:38.288 01:05:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:22:38.288 01:05:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:38.547 01:05:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:38.547 [2024-11-18 01:05:12.900440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:38.547 [2024-11-18 01:05:12.900550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.547 [2024-11-18 01:05:12.900599] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:38.547 [2024-11-18 01:05:12.900633] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.547 [2024-11-18 01:05:12.901110] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.547 [2024-11-18 01:05:12.901171] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:38.547 [2024-11-18 01:05:12.901262] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:38.547 [2024-11-18 01:05:12.901302] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:38.547 BaseBdev4 00:22:38.547 01:05:12 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:38.806 01:05:13 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:39.064 [2024-11-18 01:05:13.284558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:39.064 [2024-11-18 01:05:13.284666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.064 [2024-11-18 01:05:13.284705] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:39.064 [2024-11-18 01:05:13.284734] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.064 [2024-11-18 01:05:13.285245] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.064 [2024-11-18 01:05:13.285305] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:39.064 [2024-11-18 01:05:13.285409] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:39.064 [2024-11-18 01:05:13.285448] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:39.064 spare 00:22:39.064 01:05:13 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:39.064 01:05:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:39.064 01:05:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:39.064 01:05:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:39.064 01:05:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:39.064 01:05:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:39.065 01:05:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:39.065 01:05:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:39.065 01:05:13 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:22:39.065 01:05:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:39.065 01:05:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.065 01:05:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.065 [2024-11-18 01:05:13.385577] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480 00:22:39.065 [2024-11-18 01:05:13.385619] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:39.065 [2024-11-18 01:05:13.385835] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033bc0 00:22:39.065 [2024-11-18 01:05:13.386343] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480 00:22:39.065 [2024-11-18 01:05:13.386363] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480 00:22:39.065 [2024-11-18 01:05:13.386501] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.323 01:05:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:39.323 "name": "raid_bdev1", 00:22:39.323 "uuid": "2ee50624-8df8-460b-a673-5d30b55c900e", 00:22:39.323 "strip_size_kb": 0, 00:22:39.323 "state": "online", 00:22:39.323 "raid_level": "raid1", 00:22:39.323 "superblock": true, 00:22:39.323 "num_base_bdevs": 4, 00:22:39.323 "num_base_bdevs_discovered": 3, 00:22:39.323 "num_base_bdevs_operational": 3, 00:22:39.323 "base_bdevs_list": [ 00:22:39.323 { 00:22:39.323 "name": "spare", 00:22:39.323 "uuid": "a75f9b8d-87b2-57d0-ab06-3309e7f8fdcd", 00:22:39.323 "is_configured": true, 00:22:39.323 "data_offset": 2048, 00:22:39.323 "data_size": 63488 00:22:39.323 }, 00:22:39.323 { 00:22:39.323 "name": null, 00:22:39.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.323 "is_configured": false, 00:22:39.323 "data_offset": 2048, 00:22:39.323 "data_size": 63488 00:22:39.323 }, 00:22:39.323 { 00:22:39.323 "name": "BaseBdev3", 00:22:39.323 "uuid": "42e407a6-c545-5896-bd9a-c8acfb85b7df", 00:22:39.323 "is_configured": true, 00:22:39.323 "data_offset": 2048, 00:22:39.323 "data_size": 63488 00:22:39.323 }, 00:22:39.323 { 00:22:39.323 "name": "BaseBdev4", 00:22:39.323 "uuid": "1b18de41-fd93-5793-8f8d-daacf43c9da4", 00:22:39.323 "is_configured": true, 00:22:39.323 "data_offset": 2048, 00:22:39.323 "data_size": 63488 00:22:39.323 } 00:22:39.323 ] 00:22:39.323 }' 00:22:39.323 01:05:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:39.323 01:05:13 -- common/autotest_common.sh@10 -- # set +x 00:22:39.890 01:05:14 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:39.890 01:05:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:39.890 01:05:14 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:39.890 01:05:14 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:39.890 01:05:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:39.890 01:05:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.890 01:05:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.148 01:05:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:40.148 "name": "raid_bdev1", 00:22:40.148 "uuid": "2ee50624-8df8-460b-a673-5d30b55c900e", 00:22:40.148 "strip_size_kb": 0, 00:22:40.148 "state": "online", 00:22:40.148 "raid_level": "raid1", 00:22:40.148 
"superblock": true, 00:22:40.148 "num_base_bdevs": 4, 00:22:40.148 "num_base_bdevs_discovered": 3, 00:22:40.148 "num_base_bdevs_operational": 3, 00:22:40.148 "base_bdevs_list": [ 00:22:40.148 { 00:22:40.148 "name": "spare", 00:22:40.148 "uuid": "a75f9b8d-87b2-57d0-ab06-3309e7f8fdcd", 00:22:40.148 "is_configured": true, 00:22:40.148 "data_offset": 2048, 00:22:40.148 "data_size": 63488 00:22:40.148 }, 00:22:40.148 { 00:22:40.148 "name": null, 00:22:40.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.148 "is_configured": false, 00:22:40.148 "data_offset": 2048, 00:22:40.148 "data_size": 63488 00:22:40.148 }, 00:22:40.148 { 00:22:40.148 "name": "BaseBdev3", 00:22:40.148 "uuid": "42e407a6-c545-5896-bd9a-c8acfb85b7df", 00:22:40.148 "is_configured": true, 00:22:40.148 "data_offset": 2048, 00:22:40.148 "data_size": 63488 00:22:40.148 }, 00:22:40.148 { 00:22:40.148 "name": "BaseBdev4", 00:22:40.148 "uuid": "1b18de41-fd93-5793-8f8d-daacf43c9da4", 00:22:40.148 "is_configured": true, 00:22:40.148 "data_offset": 2048, 00:22:40.148 "data_size": 63488 00:22:40.148 } 00:22:40.148 ] 00:22:40.148 }' 00:22:40.148 01:05:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:40.148 01:05:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:40.148 01:05:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:40.148 01:05:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:40.148 01:05:14 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.148 01:05:14 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:40.407 01:05:14 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:40.407 01:05:14 -- bdev/bdev_raid.sh@709 -- # killprocess 136861 00:22:40.407 01:05:14 -- common/autotest_common.sh@936 -- # '[' -z 136861 ']' 00:22:40.407 01:05:14 -- common/autotest_common.sh@940 -- # kill -0 136861 00:22:40.407 01:05:14 -- common/autotest_common.sh@941 -- # uname 00:22:40.407 01:05:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:40.407 01:05:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136861 00:22:40.407 01:05:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:40.407 01:05:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:40.407 01:05:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136861' 00:22:40.407 killing process with pid 136861 00:22:40.407 01:05:14 -- common/autotest_common.sh@955 -- # kill 136861 00:22:40.407 Received shutdown signal, test time was about 16.692600 seconds 00:22:40.407 00:22:40.407 Latency(us) 00:22:40.407 [2024-11-18T01:05:14.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.407 [2024-11-18T01:05:14.806Z] =================================================================================================================== 00:22:40.407 [2024-11-18T01:05:14.806Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.407 [2024-11-18 01:05:14.692374] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:40.407 01:05:14 -- common/autotest_common.sh@960 -- # wait 136861 00:22:40.407 [2024-11-18 01:05:14.692495] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:40.407 [2024-11-18 01:05:14.692595] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:40.407 [2024-11-18 01:05:14.692606] 
bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline 00:22:40.407 [2024-11-18 01:05:14.778843] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:40.975 00:22:40.975 real 0m22.189s 00:22:40.975 user 0m34.969s 00:22:40.975 sys 0m3.864s 00:22:40.975 01:05:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:40.975 01:05:15 -- common/autotest_common.sh@10 -- # set +x 00:22:40.975 ************************************ 00:22:40.975 END TEST raid_rebuild_test_sb_io 00:22:40.975 ************************************ 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:22:40.975 01:05:15 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:22:40.975 01:05:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:40.975 01:05:15 -- common/autotest_common.sh@10 -- # set +x 00:22:40.975 ************************************ 00:22:40.975 START TEST raid5f_state_function_test 00:22:40.975 ************************************ 00:22:40.975 01:05:15 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 false 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@226 -- # raid_pid=137461 00:22:40.975 Process raid pid: 137461 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 137461' 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@228 -- # 
waitforlisten 137461 /var/tmp/spdk-raid.sock 00:22:40.975 01:05:15 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:40.975 01:05:15 -- common/autotest_common.sh@829 -- # '[' -z 137461 ']' 00:22:40.975 01:05:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:40.975 01:05:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.975 01:05:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:40.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:40.975 01:05:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.975 01:05:15 -- common/autotest_common.sh@10 -- # set +x 00:22:40.975 [2024-11-18 01:05:15.339264] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:40.975 [2024-11-18 01:05:15.339484] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.233 [2024-11-18 01:05:15.485679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.233 [2024-11-18 01:05:15.565646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.491 [2024-11-18 01:05:15.643879] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:42.058 01:05:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:42.058 01:05:16 -- common/autotest_common.sh@862 -- # return 0 00:22:42.058 01:05:16 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:42.316 [2024-11-18 01:05:16.568317] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:42.316 [2024-11-18 01:05:16.568433] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:42.316 [2024-11-18 01:05:16.568446] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:42.316 [2024-11-18 01:05:16.568467] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:42.316 [2024-11-18 01:05:16.568473] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:42.316 [2024-11-18 01:05:16.568523] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:42.316 01:05:16 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:42.316 01:05:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:42.316 01:05:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:42.316 01:05:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:42.316 01:05:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:42.316 01:05:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:42.316 01:05:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:42.316 01:05:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:42.316 01:05:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:42.316 01:05:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:42.316 01:05:16 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.316 01:05:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.574 01:05:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:42.574 "name": "Existed_Raid", 00:22:42.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.574 "strip_size_kb": 64, 00:22:42.574 "state": "configuring", 00:22:42.574 "raid_level": "raid5f", 00:22:42.574 "superblock": false, 00:22:42.574 "num_base_bdevs": 3, 00:22:42.574 "num_base_bdevs_discovered": 0, 00:22:42.574 "num_base_bdevs_operational": 3, 00:22:42.574 "base_bdevs_list": [ 00:22:42.574 { 00:22:42.575 "name": "BaseBdev1", 00:22:42.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.575 "is_configured": false, 00:22:42.575 "data_offset": 0, 00:22:42.575 "data_size": 0 00:22:42.575 }, 00:22:42.575 { 00:22:42.575 "name": "BaseBdev2", 00:22:42.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.575 "is_configured": false, 00:22:42.575 "data_offset": 0, 00:22:42.575 "data_size": 0 00:22:42.575 }, 00:22:42.575 { 00:22:42.575 "name": "BaseBdev3", 00:22:42.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.575 "is_configured": false, 00:22:42.575 "data_offset": 0, 00:22:42.575 "data_size": 0 00:22:42.575 } 00:22:42.575 ] 00:22:42.575 }' 00:22:42.575 01:05:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:42.575 01:05:16 -- common/autotest_common.sh@10 -- # set +x 00:22:43.140 01:05:17 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:43.140 [2024-11-18 01:05:17.448336] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:43.140 [2024-11-18 01:05:17.448392] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:22:43.140 01:05:17 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:43.398 [2024-11-18 01:05:17.640396] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:43.398 [2024-11-18 01:05:17.640487] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:43.398 [2024-11-18 01:05:17.640497] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:43.398 [2024-11-18 01:05:17.640521] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:43.398 [2024-11-18 01:05:17.640527] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:43.398 [2024-11-18 01:05:17.640553] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:43.398 01:05:17 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:43.656 [2024-11-18 01:05:17.848509] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:43.656 BaseBdev1 00:22:43.656 01:05:17 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:43.656 01:05:17 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:43.656 01:05:17 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:43.656 01:05:17 -- common/autotest_common.sh@899 -- # local 
i 00:22:43.656 01:05:17 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:43.656 01:05:17 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:43.656 01:05:17 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:43.914 01:05:18 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:43.914 [ 00:22:43.914 { 00:22:43.914 "name": "BaseBdev1", 00:22:43.914 "aliases": [ 00:22:43.914 "7292d580-1771-4e24-aac4-e06f21262fdd" 00:22:43.914 ], 00:22:43.914 "product_name": "Malloc disk", 00:22:43.914 "block_size": 512, 00:22:43.914 "num_blocks": 65536, 00:22:43.914 "uuid": "7292d580-1771-4e24-aac4-e06f21262fdd", 00:22:43.914 "assigned_rate_limits": { 00:22:43.914 "rw_ios_per_sec": 0, 00:22:43.914 "rw_mbytes_per_sec": 0, 00:22:43.914 "r_mbytes_per_sec": 0, 00:22:43.914 "w_mbytes_per_sec": 0 00:22:43.914 }, 00:22:43.914 "claimed": true, 00:22:43.914 "claim_type": "exclusive_write", 00:22:43.914 "zoned": false, 00:22:43.914 "supported_io_types": { 00:22:43.914 "read": true, 00:22:43.914 "write": true, 00:22:43.914 "unmap": true, 00:22:43.914 "write_zeroes": true, 00:22:43.914 "flush": true, 00:22:43.914 "reset": true, 00:22:43.914 "compare": false, 00:22:43.914 "compare_and_write": false, 00:22:43.914 "abort": true, 00:22:43.914 "nvme_admin": false, 00:22:43.914 "nvme_io": false 00:22:43.914 }, 00:22:43.914 "memory_domains": [ 00:22:43.914 { 00:22:43.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.914 "dma_device_type": 2 00:22:43.914 } 00:22:43.914 ], 00:22:43.914 "driver_specific": {} 00:22:43.914 } 00:22:43.914 ] 00:22:43.914 01:05:18 -- common/autotest_common.sh@905 -- # return 0 00:22:43.914 01:05:18 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:43.914 01:05:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:43.914 01:05:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:43.914 01:05:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:43.914 01:05:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:43.914 01:05:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:43.914 01:05:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:43.914 01:05:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:43.914 01:05:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:43.914 01:05:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:43.914 01:05:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.914 01:05:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.172 01:05:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:44.172 "name": "Existed_Raid", 00:22:44.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.172 "strip_size_kb": 64, 00:22:44.172 "state": "configuring", 00:22:44.172 "raid_level": "raid5f", 00:22:44.172 "superblock": false, 00:22:44.172 "num_base_bdevs": 3, 00:22:44.172 "num_base_bdevs_discovered": 1, 00:22:44.172 "num_base_bdevs_operational": 3, 00:22:44.172 "base_bdevs_list": [ 00:22:44.172 { 00:22:44.172 "name": "BaseBdev1", 00:22:44.172 "uuid": "7292d580-1771-4e24-aac4-e06f21262fdd", 00:22:44.172 "is_configured": true, 00:22:44.172 "data_offset": 0, 00:22:44.172 "data_size": 65536 00:22:44.172 }, 
00:22:44.172 { 00:22:44.172 "name": "BaseBdev2", 00:22:44.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.172 "is_configured": false, 00:22:44.172 "data_offset": 0, 00:22:44.172 "data_size": 0 00:22:44.172 }, 00:22:44.172 { 00:22:44.172 "name": "BaseBdev3", 00:22:44.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.172 "is_configured": false, 00:22:44.172 "data_offset": 0, 00:22:44.172 "data_size": 0 00:22:44.172 } 00:22:44.172 ] 00:22:44.172 }' 00:22:44.172 01:05:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:44.172 01:05:18 -- common/autotest_common.sh@10 -- # set +x 00:22:44.738 01:05:19 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:44.995 [2024-11-18 01:05:19.300769] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:44.995 [2024-11-18 01:05:19.300843] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:22:44.995 01:05:19 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:22:44.995 01:05:19 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:45.253 [2024-11-18 01:05:19.492942] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:45.253 [2024-11-18 01:05:19.495425] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:45.253 [2024-11-18 01:05:19.495497] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:45.253 [2024-11-18 01:05:19.495507] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:45.253 [2024-11-18 01:05:19.495533] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:45.253 01:05:19 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:45.253 01:05:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:45.253 01:05:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:45.253 01:05:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:45.253 01:05:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:45.253 01:05:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:45.253 01:05:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:45.253 01:05:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:45.253 01:05:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:45.253 01:05:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:45.253 01:05:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:45.253 01:05:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:45.253 01:05:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.253 01:05:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.511 01:05:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:45.511 "name": "Existed_Raid", 00:22:45.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.511 "strip_size_kb": 64, 00:22:45.511 "state": "configuring", 00:22:45.511 "raid_level": "raid5f", 00:22:45.511 "superblock": false, 00:22:45.511 "num_base_bdevs": 3, 00:22:45.511 "num_base_bdevs_discovered": 1, 
00:22:45.511 "num_base_bdevs_operational": 3, 00:22:45.511 "base_bdevs_list": [ 00:22:45.511 { 00:22:45.511 "name": "BaseBdev1", 00:22:45.511 "uuid": "7292d580-1771-4e24-aac4-e06f21262fdd", 00:22:45.511 "is_configured": true, 00:22:45.511 "data_offset": 0, 00:22:45.511 "data_size": 65536 00:22:45.511 }, 00:22:45.511 { 00:22:45.511 "name": "BaseBdev2", 00:22:45.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.511 "is_configured": false, 00:22:45.511 "data_offset": 0, 00:22:45.511 "data_size": 0 00:22:45.511 }, 00:22:45.511 { 00:22:45.511 "name": "BaseBdev3", 00:22:45.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.511 "is_configured": false, 00:22:45.511 "data_offset": 0, 00:22:45.511 "data_size": 0 00:22:45.511 } 00:22:45.511 ] 00:22:45.511 }' 00:22:45.511 01:05:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:45.511 01:05:19 -- common/autotest_common.sh@10 -- # set +x 00:22:46.077 01:05:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:46.335 [2024-11-18 01:05:20.579640] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:46.335 BaseBdev2 00:22:46.335 01:05:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:46.335 01:05:20 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:46.335 01:05:20 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:46.335 01:05:20 -- common/autotest_common.sh@899 -- # local i 00:22:46.335 01:05:20 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:46.335 01:05:20 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:46.335 01:05:20 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:46.594 01:05:20 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:46.852 [ 00:22:46.852 { 00:22:46.852 "name": "BaseBdev2", 00:22:46.852 "aliases": [ 00:22:46.852 "98d2385d-e34a-41bf-95a6-15fdded60a03" 00:22:46.852 ], 00:22:46.852 "product_name": "Malloc disk", 00:22:46.852 "block_size": 512, 00:22:46.852 "num_blocks": 65536, 00:22:46.852 "uuid": "98d2385d-e34a-41bf-95a6-15fdded60a03", 00:22:46.852 "assigned_rate_limits": { 00:22:46.852 "rw_ios_per_sec": 0, 00:22:46.852 "rw_mbytes_per_sec": 0, 00:22:46.852 "r_mbytes_per_sec": 0, 00:22:46.852 "w_mbytes_per_sec": 0 00:22:46.852 }, 00:22:46.852 "claimed": true, 00:22:46.852 "claim_type": "exclusive_write", 00:22:46.852 "zoned": false, 00:22:46.852 "supported_io_types": { 00:22:46.852 "read": true, 00:22:46.852 "write": true, 00:22:46.852 "unmap": true, 00:22:46.852 "write_zeroes": true, 00:22:46.852 "flush": true, 00:22:46.852 "reset": true, 00:22:46.852 "compare": false, 00:22:46.852 "compare_and_write": false, 00:22:46.852 "abort": true, 00:22:46.852 "nvme_admin": false, 00:22:46.852 "nvme_io": false 00:22:46.852 }, 00:22:46.852 "memory_domains": [ 00:22:46.852 { 00:22:46.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.852 "dma_device_type": 2 00:22:46.852 } 00:22:46.852 ], 00:22:46.852 "driver_specific": {} 00:22:46.852 } 00:22:46.852 ] 00:22:46.852 01:05:21 -- common/autotest_common.sh@905 -- # return 0 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 3 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:46.852 "name": "Existed_Raid", 00:22:46.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.852 "strip_size_kb": 64, 00:22:46.852 "state": "configuring", 00:22:46.852 "raid_level": "raid5f", 00:22:46.852 "superblock": false, 00:22:46.852 "num_base_bdevs": 3, 00:22:46.852 "num_base_bdevs_discovered": 2, 00:22:46.852 "num_base_bdevs_operational": 3, 00:22:46.852 "base_bdevs_list": [ 00:22:46.852 { 00:22:46.852 "name": "BaseBdev1", 00:22:46.852 "uuid": "7292d580-1771-4e24-aac4-e06f21262fdd", 00:22:46.852 "is_configured": true, 00:22:46.852 "data_offset": 0, 00:22:46.852 "data_size": 65536 00:22:46.852 }, 00:22:46.852 { 00:22:46.852 "name": "BaseBdev2", 00:22:46.852 "uuid": "98d2385d-e34a-41bf-95a6-15fdded60a03", 00:22:46.852 "is_configured": true, 00:22:46.852 "data_offset": 0, 00:22:46.852 "data_size": 65536 00:22:46.852 }, 00:22:46.852 { 00:22:46.852 "name": "BaseBdev3", 00:22:46.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.852 "is_configured": false, 00:22:46.852 "data_offset": 0, 00:22:46.852 "data_size": 0 00:22:46.852 } 00:22:46.852 ] 00:22:46.852 }' 00:22:46.852 01:05:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:46.852 01:05:21 -- common/autotest_common.sh@10 -- # set +x 00:22:47.786 01:05:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:47.786 [2024-11-18 01:05:22.121546] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:47.786 [2024-11-18 01:05:22.121624] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:22:47.786 [2024-11-18 01:05:22.121634] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:47.786 [2024-11-18 01:05:22.121781] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:22:47.786 [2024-11-18 01:05:22.122678] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:22:47.786 [2024-11-18 01:05:22.122700] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:22:47.786 [2024-11-18 01:05:22.122964] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.786 BaseBdev3 00:22:47.786 01:05:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:47.786 01:05:22 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:47.786 01:05:22 -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:47.786 01:05:22 -- common/autotest_common.sh@899 -- # local i 00:22:47.786 01:05:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:47.786 01:05:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:47.786 01:05:22 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:48.044 01:05:22 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:48.315 [ 00:22:48.315 { 00:22:48.315 "name": "BaseBdev3", 00:22:48.315 "aliases": [ 00:22:48.315 "49888283-7ec1-48c2-a40f-16bc83029396" 00:22:48.315 ], 00:22:48.315 "product_name": "Malloc disk", 00:22:48.315 "block_size": 512, 00:22:48.315 "num_blocks": 65536, 00:22:48.315 "uuid": "49888283-7ec1-48c2-a40f-16bc83029396", 00:22:48.315 "assigned_rate_limits": { 00:22:48.315 "rw_ios_per_sec": 0, 00:22:48.315 "rw_mbytes_per_sec": 0, 00:22:48.315 "r_mbytes_per_sec": 0, 00:22:48.315 "w_mbytes_per_sec": 0 00:22:48.315 }, 00:22:48.315 "claimed": true, 00:22:48.315 "claim_type": "exclusive_write", 00:22:48.315 "zoned": false, 00:22:48.315 "supported_io_types": { 00:22:48.315 "read": true, 00:22:48.315 "write": true, 00:22:48.315 "unmap": true, 00:22:48.315 "write_zeroes": true, 00:22:48.315 "flush": true, 00:22:48.315 "reset": true, 00:22:48.315 "compare": false, 00:22:48.315 "compare_and_write": false, 00:22:48.315 "abort": true, 00:22:48.315 "nvme_admin": false, 00:22:48.315 "nvme_io": false 00:22:48.315 }, 00:22:48.315 "memory_domains": [ 00:22:48.315 { 00:22:48.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.315 "dma_device_type": 2 00:22:48.315 } 00:22:48.315 ], 00:22:48.315 "driver_specific": {} 00:22:48.315 } 00:22:48.315 ] 00:22:48.315 01:05:22 -- common/autotest_common.sh@905 -- # return 0 00:22:48.315 01:05:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:48.315 01:05:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:48.315 01:05:22 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:48.315 01:05:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:48.315 01:05:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:48.315 01:05:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:48.315 01:05:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:48.315 01:05:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:48.315 01:05:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:48.315 01:05:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:48.315 01:05:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:48.315 01:05:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:48.315 01:05:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.315 01:05:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.586 01:05:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:48.586 "name": "Existed_Raid", 00:22:48.586 "uuid": "6670b876-70fc-4271-aa1f-82fa7c795f60", 00:22:48.586 "strip_size_kb": 64, 00:22:48.586 "state": "online", 00:22:48.586 "raid_level": "raid5f", 00:22:48.586 "superblock": false, 00:22:48.586 "num_base_bdevs": 3, 00:22:48.586 "num_base_bdevs_discovered": 3, 00:22:48.586 "num_base_bdevs_operational": 3, 00:22:48.586 
"base_bdevs_list": [ 00:22:48.586 { 00:22:48.586 "name": "BaseBdev1", 00:22:48.586 "uuid": "7292d580-1771-4e24-aac4-e06f21262fdd", 00:22:48.586 "is_configured": true, 00:22:48.586 "data_offset": 0, 00:22:48.586 "data_size": 65536 00:22:48.586 }, 00:22:48.586 { 00:22:48.586 "name": "BaseBdev2", 00:22:48.586 "uuid": "98d2385d-e34a-41bf-95a6-15fdded60a03", 00:22:48.586 "is_configured": true, 00:22:48.586 "data_offset": 0, 00:22:48.586 "data_size": 65536 00:22:48.586 }, 00:22:48.586 { 00:22:48.586 "name": "BaseBdev3", 00:22:48.586 "uuid": "49888283-7ec1-48c2-a40f-16bc83029396", 00:22:48.586 "is_configured": true, 00:22:48.586 "data_offset": 0, 00:22:48.586 "data_size": 65536 00:22:48.586 } 00:22:48.586 ] 00:22:48.586 }' 00:22:48.586 01:05:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:48.586 01:05:22 -- common/autotest_common.sh@10 -- # set +x 00:22:49.153 01:05:23 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:49.412 [2024-11-18 01:05:23.746029] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.412 01:05:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.670 01:05:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:49.670 "name": "Existed_Raid", 00:22:49.670 "uuid": "6670b876-70fc-4271-aa1f-82fa7c795f60", 00:22:49.670 "strip_size_kb": 64, 00:22:49.670 "state": "online", 00:22:49.670 "raid_level": "raid5f", 00:22:49.670 "superblock": false, 00:22:49.670 "num_base_bdevs": 3, 00:22:49.670 "num_base_bdevs_discovered": 2, 00:22:49.670 "num_base_bdevs_operational": 2, 00:22:49.670 "base_bdevs_list": [ 00:22:49.670 { 00:22:49.670 "name": null, 00:22:49.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.670 "is_configured": false, 00:22:49.670 "data_offset": 0, 00:22:49.670 "data_size": 65536 00:22:49.670 }, 00:22:49.670 { 00:22:49.670 "name": "BaseBdev2", 00:22:49.671 "uuid": "98d2385d-e34a-41bf-95a6-15fdded60a03", 00:22:49.671 "is_configured": true, 00:22:49.671 "data_offset": 0, 00:22:49.671 "data_size": 65536 00:22:49.671 }, 00:22:49.671 { 00:22:49.671 "name": "BaseBdev3", 00:22:49.671 "uuid": "49888283-7ec1-48c2-a40f-16bc83029396", 00:22:49.671 
"is_configured": true, 00:22:49.671 "data_offset": 0, 00:22:49.671 "data_size": 65536 00:22:49.671 } 00:22:49.671 ] 00:22:49.671 }' 00:22:49.671 01:05:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:49.671 01:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:50.236 01:05:24 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:50.236 01:05:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:50.236 01:05:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.236 01:05:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:50.494 01:05:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:50.494 01:05:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:50.494 01:05:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:50.753 [2024-11-18 01:05:25.031838] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:50.753 [2024-11-18 01:05:25.032151] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:50.753 [2024-11-18 01:05:25.032354] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:50.753 01:05:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:50.753 01:05:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:50.753 01:05:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.753 01:05:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:51.096 01:05:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:51.096 01:05:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:51.096 01:05:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:51.096 [2024-11-18 01:05:25.433339] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:51.096 [2024-11-18 01:05:25.433720] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:22:51.096 01:05:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:51.096 01:05:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:51.096 01:05:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:51.096 01:05:25 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.354 01:05:25 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:51.354 01:05:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:51.354 01:05:25 -- bdev/bdev_raid.sh@287 -- # killprocess 137461 00:22:51.354 01:05:25 -- common/autotest_common.sh@936 -- # '[' -z 137461 ']' 00:22:51.354 01:05:25 -- common/autotest_common.sh@940 -- # kill -0 137461 00:22:51.354 01:05:25 -- common/autotest_common.sh@941 -- # uname 00:22:51.354 01:05:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:51.354 01:05:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137461 00:22:51.354 01:05:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:51.354 01:05:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:51.354 01:05:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137461' 00:22:51.354 killing process with pid 137461 00:22:51.354 01:05:25 -- 
common/autotest_common.sh@955 -- # kill 137461 00:22:51.354 01:05:25 -- common/autotest_common.sh@960 -- # wait 137461 00:22:51.354 [2024-11-18 01:05:25.702417] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:51.354 [2024-11-18 01:05:25.702511] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:51.920 01:05:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:51.920 00:22:51.921 real 0m10.834s 00:22:51.921 user 0m19.075s 00:22:51.921 sys 0m1.888s 00:22:51.921 01:05:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:51.921 01:05:26 -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 ************************************ 00:22:51.921 END TEST raid5f_state_function_test 00:22:51.921 ************************************ 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:22:51.921 01:05:26 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:22:51.921 01:05:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:51.921 01:05:26 -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 ************************************ 00:22:51.921 START TEST raid5f_state_function_test_sb 00:22:51.921 ************************************ 00:22:51.921 01:05:26 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 true 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@226 -- # raid_pid=137827 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 137827' 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc 
-r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:51.921 Process raid pid: 137827 00:22:51.921 01:05:26 -- bdev/bdev_raid.sh@228 -- # waitforlisten 137827 /var/tmp/spdk-raid.sock 00:22:51.921 01:05:26 -- common/autotest_common.sh@829 -- # '[' -z 137827 ']' 00:22:51.921 01:05:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:51.921 01:05:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.921 01:05:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:51.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:51.921 01:05:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.921 01:05:26 -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 [2024-11-18 01:05:26.257856] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:51.921 [2024-11-18 01:05:26.258268] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.179 [2024-11-18 01:05:26.401488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.179 [2024-11-18 01:05:26.481281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.179 [2024-11-18 01:05:26.561648] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:53.115 01:05:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.115 01:05:27 -- common/autotest_common.sh@862 -- # return 0 00:22:53.115 01:05:27 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:53.115 [2024-11-18 01:05:27.410097] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:53.115 [2024-11-18 01:05:27.410464] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:53.115 [2024-11-18 01:05:27.410552] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:53.115 [2024-11-18 01:05:27.410606] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:53.115 [2024-11-18 01:05:27.410632] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:53.115 [2024-11-18 01:05:27.410703] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:53.115 01:05:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:53.115 01:05:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:53.115 01:05:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:53.115 01:05:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:53.115 01:05:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:53.115 01:05:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:53.115 01:05:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:53.115 01:05:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:53.115 01:05:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:53.115 01:05:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:53.115 01:05:27 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.115 01:05:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.374 01:05:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:53.374 "name": "Existed_Raid", 00:22:53.374 "uuid": "d725a73c-552c-44aa-a8fc-6893afc3ac9d", 00:22:53.374 "strip_size_kb": 64, 00:22:53.374 "state": "configuring", 00:22:53.374 "raid_level": "raid5f", 00:22:53.374 "superblock": true, 00:22:53.374 "num_base_bdevs": 3, 00:22:53.374 "num_base_bdevs_discovered": 0, 00:22:53.374 "num_base_bdevs_operational": 3, 00:22:53.374 "base_bdevs_list": [ 00:22:53.374 { 00:22:53.374 "name": "BaseBdev1", 00:22:53.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.374 "is_configured": false, 00:22:53.374 "data_offset": 0, 00:22:53.374 "data_size": 0 00:22:53.374 }, 00:22:53.374 { 00:22:53.374 "name": "BaseBdev2", 00:22:53.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.374 "is_configured": false, 00:22:53.374 "data_offset": 0, 00:22:53.374 "data_size": 0 00:22:53.374 }, 00:22:53.374 { 00:22:53.374 "name": "BaseBdev3", 00:22:53.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.374 "is_configured": false, 00:22:53.374 "data_offset": 0, 00:22:53.374 "data_size": 0 00:22:53.374 } 00:22:53.374 ] 00:22:53.374 }' 00:22:53.375 01:05:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:53.375 01:05:27 -- common/autotest_common.sh@10 -- # set +x 00:22:53.942 01:05:28 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:54.201 [2024-11-18 01:05:28.422178] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:54.201 [2024-11-18 01:05:28.422476] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:22:54.201 01:05:28 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:54.460 [2024-11-18 01:05:28.686291] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:54.460 [2024-11-18 01:05:28.686626] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:54.460 [2024-11-18 01:05:28.686716] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:54.460 [2024-11-18 01:05:28.686772] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:54.460 [2024-11-18 01:05:28.686798] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:54.460 [2024-11-18 01:05:28.686843] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:54.460 01:05:28 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:54.718 [2024-11-18 01:05:28.890344] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:54.718 BaseBdev1 00:22:54.718 01:05:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:54.718 01:05:28 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:54.718 01:05:28 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:54.718 01:05:28 -- common/autotest_common.sh@899 -- # 
local i 00:22:54.718 01:05:28 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:54.718 01:05:28 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:54.718 01:05:28 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:54.718 01:05:29 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:54.976 [ 00:22:54.976 { 00:22:54.976 "name": "BaseBdev1", 00:22:54.976 "aliases": [ 00:22:54.976 "55e45169-0b72-4a28-bda2-475b89c64b88" 00:22:54.976 ], 00:22:54.976 "product_name": "Malloc disk", 00:22:54.976 "block_size": 512, 00:22:54.976 "num_blocks": 65536, 00:22:54.976 "uuid": "55e45169-0b72-4a28-bda2-475b89c64b88", 00:22:54.976 "assigned_rate_limits": { 00:22:54.976 "rw_ios_per_sec": 0, 00:22:54.976 "rw_mbytes_per_sec": 0, 00:22:54.976 "r_mbytes_per_sec": 0, 00:22:54.976 "w_mbytes_per_sec": 0 00:22:54.976 }, 00:22:54.976 "claimed": true, 00:22:54.976 "claim_type": "exclusive_write", 00:22:54.976 "zoned": false, 00:22:54.976 "supported_io_types": { 00:22:54.976 "read": true, 00:22:54.976 "write": true, 00:22:54.976 "unmap": true, 00:22:54.976 "write_zeroes": true, 00:22:54.976 "flush": true, 00:22:54.976 "reset": true, 00:22:54.976 "compare": false, 00:22:54.976 "compare_and_write": false, 00:22:54.976 "abort": true, 00:22:54.976 "nvme_admin": false, 00:22:54.976 "nvme_io": false 00:22:54.976 }, 00:22:54.976 "memory_domains": [ 00:22:54.976 { 00:22:54.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.976 "dma_device_type": 2 00:22:54.976 } 00:22:54.976 ], 00:22:54.976 "driver_specific": {} 00:22:54.976 } 00:22:54.976 ] 00:22:54.976 01:05:29 -- common/autotest_common.sh@905 -- # return 0 00:22:54.976 01:05:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:54.976 01:05:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:54.976 01:05:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:54.976 01:05:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:54.976 01:05:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:54.976 01:05:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:54.976 01:05:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:54.976 01:05:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:54.976 01:05:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:54.976 01:05:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:54.976 01:05:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.976 01:05:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:55.235 01:05:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:55.235 "name": "Existed_Raid", 00:22:55.235 "uuid": "cf7378cb-99a3-41a2-b057-fc73283f3ccc", 00:22:55.235 "strip_size_kb": 64, 00:22:55.235 "state": "configuring", 00:22:55.235 "raid_level": "raid5f", 00:22:55.235 "superblock": true, 00:22:55.235 "num_base_bdevs": 3, 00:22:55.235 "num_base_bdevs_discovered": 1, 00:22:55.235 "num_base_bdevs_operational": 3, 00:22:55.235 "base_bdevs_list": [ 00:22:55.235 { 00:22:55.235 "name": "BaseBdev1", 00:22:55.235 "uuid": "55e45169-0b72-4a28-bda2-475b89c64b88", 00:22:55.235 "is_configured": true, 00:22:55.235 "data_offset": 2048, 00:22:55.235 "data_size": 63488 00:22:55.235 }, 
00:22:55.235 { 00:22:55.235 "name": "BaseBdev2", 00:22:55.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.235 "is_configured": false, 00:22:55.235 "data_offset": 0, 00:22:55.235 "data_size": 0 00:22:55.235 }, 00:22:55.235 { 00:22:55.235 "name": "BaseBdev3", 00:22:55.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.235 "is_configured": false, 00:22:55.235 "data_offset": 0, 00:22:55.235 "data_size": 0 00:22:55.235 } 00:22:55.235 ] 00:22:55.235 }' 00:22:55.235 01:05:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:55.235 01:05:29 -- common/autotest_common.sh@10 -- # set +x 00:22:55.820 01:05:30 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:56.079 [2024-11-18 01:05:30.358688] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:56.079 [2024-11-18 01:05:30.358996] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:22:56.079 01:05:30 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:22:56.079 01:05:30 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:56.337 01:05:30 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:56.596 BaseBdev1 00:22:56.596 01:05:30 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:22:56.596 01:05:30 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:56.596 01:05:30 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:56.596 01:05:30 -- common/autotest_common.sh@899 -- # local i 00:22:56.596 01:05:30 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:56.596 01:05:30 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:56.596 01:05:30 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:56.854 01:05:31 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:56.854 [ 00:22:56.854 { 00:22:56.854 "name": "BaseBdev1", 00:22:56.854 "aliases": [ 00:22:56.854 "e1ead8ca-b31b-4cc9-9b3c-a1b235dce2c6" 00:22:56.854 ], 00:22:56.854 "product_name": "Malloc disk", 00:22:56.854 "block_size": 512, 00:22:56.854 "num_blocks": 65536, 00:22:56.854 "uuid": "e1ead8ca-b31b-4cc9-9b3c-a1b235dce2c6", 00:22:56.854 "assigned_rate_limits": { 00:22:56.854 "rw_ios_per_sec": 0, 00:22:56.854 "rw_mbytes_per_sec": 0, 00:22:56.854 "r_mbytes_per_sec": 0, 00:22:56.854 "w_mbytes_per_sec": 0 00:22:56.854 }, 00:22:56.854 "claimed": false, 00:22:56.854 "zoned": false, 00:22:56.854 "supported_io_types": { 00:22:56.854 "read": true, 00:22:56.854 "write": true, 00:22:56.854 "unmap": true, 00:22:56.854 "write_zeroes": true, 00:22:56.854 "flush": true, 00:22:56.854 "reset": true, 00:22:56.854 "compare": false, 00:22:56.854 "compare_and_write": false, 00:22:56.854 "abort": true, 00:22:56.854 "nvme_admin": false, 00:22:56.854 "nvme_io": false 00:22:56.854 }, 00:22:56.854 "memory_domains": [ 00:22:56.854 { 00:22:56.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.854 "dma_device_type": 2 00:22:56.854 } 00:22:56.854 ], 00:22:56.854 "driver_specific": {} 00:22:56.854 } 00:22:56.854 ] 00:22:57.113 01:05:31 -- common/autotest_common.sh@905 -- # return 0 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:57.113 [2024-11-18 01:05:31.443700] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:57.113 [2024-11-18 01:05:31.446455] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:57.113 [2024-11-18 01:05:31.446669] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:57.113 [2024-11-18 01:05:31.446752] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:57.113 [2024-11-18 01:05:31.446814] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.113 01:05:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:57.384 01:05:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:57.384 "name": "Existed_Raid", 00:22:57.384 "uuid": "84ccaa41-4060-4b5f-b0e9-797088f17c6b", 00:22:57.384 "strip_size_kb": 64, 00:22:57.384 "state": "configuring", 00:22:57.385 "raid_level": "raid5f", 00:22:57.385 "superblock": true, 00:22:57.385 "num_base_bdevs": 3, 00:22:57.385 "num_base_bdevs_discovered": 1, 00:22:57.385 "num_base_bdevs_operational": 3, 00:22:57.385 "base_bdevs_list": [ 00:22:57.385 { 00:22:57.385 "name": "BaseBdev1", 00:22:57.385 "uuid": "e1ead8ca-b31b-4cc9-9b3c-a1b235dce2c6", 00:22:57.385 "is_configured": true, 00:22:57.385 "data_offset": 2048, 00:22:57.385 "data_size": 63488 00:22:57.385 }, 00:22:57.385 { 00:22:57.385 "name": "BaseBdev2", 00:22:57.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.385 "is_configured": false, 00:22:57.385 "data_offset": 0, 00:22:57.385 "data_size": 0 00:22:57.385 }, 00:22:57.385 { 00:22:57.385 "name": "BaseBdev3", 00:22:57.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.385 "is_configured": false, 00:22:57.385 "data_offset": 0, 00:22:57.385 "data_size": 0 00:22:57.385 } 00:22:57.385 ] 00:22:57.385 }' 00:22:57.385 01:05:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:57.385 01:05:31 -- common/autotest_common.sh@10 -- # set +x 00:22:58.322 01:05:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:58.322 [2024-11-18 01:05:32.635797] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:58.322 BaseBdev2 00:22:58.322 01:05:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:58.322 01:05:32 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:58.322 01:05:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:58.322 01:05:32 -- common/autotest_common.sh@899 -- # local i 00:22:58.322 01:05:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:58.322 01:05:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:58.322 01:05:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:58.580 01:05:32 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:58.838 [ 00:22:58.838 { 00:22:58.838 "name": "BaseBdev2", 00:22:58.838 "aliases": [ 00:22:58.838 "30b969a6-39ac-492e-8a14-a1abacbd1fee" 00:22:58.838 ], 00:22:58.838 "product_name": "Malloc disk", 00:22:58.838 "block_size": 512, 00:22:58.838 "num_blocks": 65536, 00:22:58.838 "uuid": "30b969a6-39ac-492e-8a14-a1abacbd1fee", 00:22:58.838 "assigned_rate_limits": { 00:22:58.838 "rw_ios_per_sec": 0, 00:22:58.838 "rw_mbytes_per_sec": 0, 00:22:58.838 "r_mbytes_per_sec": 0, 00:22:58.838 "w_mbytes_per_sec": 0 00:22:58.838 }, 00:22:58.838 "claimed": true, 00:22:58.838 "claim_type": "exclusive_write", 00:22:58.838 "zoned": false, 00:22:58.838 "supported_io_types": { 00:22:58.838 "read": true, 00:22:58.838 "write": true, 00:22:58.838 "unmap": true, 00:22:58.838 "write_zeroes": true, 00:22:58.838 "flush": true, 00:22:58.838 "reset": true, 00:22:58.838 "compare": false, 00:22:58.838 "compare_and_write": false, 00:22:58.838 "abort": true, 00:22:58.838 "nvme_admin": false, 00:22:58.838 "nvme_io": false 00:22:58.838 }, 00:22:58.838 "memory_domains": [ 00:22:58.838 { 00:22:58.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.838 "dma_device_type": 2 00:22:58.838 } 00:22:58.838 ], 00:22:58.838 "driver_specific": {} 00:22:58.838 } 00:22:58.838 ] 00:22:58.838 01:05:33 -- common/autotest_common.sh@905 -- # return 0 00:22:58.838 01:05:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:58.838 01:05:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:58.838 01:05:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:58.838 01:05:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:58.838 01:05:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:58.838 01:05:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:58.838 01:05:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:58.838 01:05:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:58.838 01:05:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:58.838 01:05:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:58.838 01:05:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:58.838 01:05:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:58.838 01:05:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.838 01:05:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:59.096 01:05:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:59.096 "name": "Existed_Raid", 00:22:59.096 "uuid": 
"84ccaa41-4060-4b5f-b0e9-797088f17c6b", 00:22:59.096 "strip_size_kb": 64, 00:22:59.096 "state": "configuring", 00:22:59.096 "raid_level": "raid5f", 00:22:59.096 "superblock": true, 00:22:59.096 "num_base_bdevs": 3, 00:22:59.096 "num_base_bdevs_discovered": 2, 00:22:59.096 "num_base_bdevs_operational": 3, 00:22:59.096 "base_bdevs_list": [ 00:22:59.096 { 00:22:59.096 "name": "BaseBdev1", 00:22:59.096 "uuid": "e1ead8ca-b31b-4cc9-9b3c-a1b235dce2c6", 00:22:59.097 "is_configured": true, 00:22:59.097 "data_offset": 2048, 00:22:59.097 "data_size": 63488 00:22:59.097 }, 00:22:59.097 { 00:22:59.097 "name": "BaseBdev2", 00:22:59.097 "uuid": "30b969a6-39ac-492e-8a14-a1abacbd1fee", 00:22:59.097 "is_configured": true, 00:22:59.097 "data_offset": 2048, 00:22:59.097 "data_size": 63488 00:22:59.097 }, 00:22:59.097 { 00:22:59.097 "name": "BaseBdev3", 00:22:59.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.097 "is_configured": false, 00:22:59.097 "data_offset": 0, 00:22:59.097 "data_size": 0 00:22:59.097 } 00:22:59.097 ] 00:22:59.097 }' 00:22:59.097 01:05:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:59.097 01:05:33 -- common/autotest_common.sh@10 -- # set +x 00:22:59.664 01:05:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:59.923 [2024-11-18 01:05:34.105594] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:59.923 [2024-11-18 01:05:34.106157] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:22:59.923 [2024-11-18 01:05:34.106314] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:59.923 [2024-11-18 01:05:34.106505] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:22:59.923 [2024-11-18 01:05:34.107423] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:22:59.923 [2024-11-18 01:05:34.107554] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:22:59.923 BaseBdev3 00:22:59.923 [2024-11-18 01:05:34.107853] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.923 01:05:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:59.923 01:05:34 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:59.923 01:05:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:59.923 01:05:34 -- common/autotest_common.sh@899 -- # local i 00:22:59.923 01:05:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:59.923 01:05:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:59.923 01:05:34 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:00.181 01:05:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:00.440 [ 00:23:00.440 { 00:23:00.440 "name": "BaseBdev3", 00:23:00.440 "aliases": [ 00:23:00.440 "e45e1449-6314-4851-a8d8-2ea299638fff" 00:23:00.440 ], 00:23:00.440 "product_name": "Malloc disk", 00:23:00.440 "block_size": 512, 00:23:00.440 "num_blocks": 65536, 00:23:00.440 "uuid": "e45e1449-6314-4851-a8d8-2ea299638fff", 00:23:00.440 "assigned_rate_limits": { 00:23:00.440 "rw_ios_per_sec": 0, 00:23:00.440 "rw_mbytes_per_sec": 0, 00:23:00.440 "r_mbytes_per_sec": 0, 00:23:00.440 
"w_mbytes_per_sec": 0 00:23:00.440 }, 00:23:00.440 "claimed": true, 00:23:00.440 "claim_type": "exclusive_write", 00:23:00.440 "zoned": false, 00:23:00.440 "supported_io_types": { 00:23:00.440 "read": true, 00:23:00.440 "write": true, 00:23:00.440 "unmap": true, 00:23:00.440 "write_zeroes": true, 00:23:00.440 "flush": true, 00:23:00.440 "reset": true, 00:23:00.440 "compare": false, 00:23:00.440 "compare_and_write": false, 00:23:00.440 "abort": true, 00:23:00.440 "nvme_admin": false, 00:23:00.440 "nvme_io": false 00:23:00.440 }, 00:23:00.440 "memory_domains": [ 00:23:00.440 { 00:23:00.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.440 "dma_device_type": 2 00:23:00.440 } 00:23:00.440 ], 00:23:00.440 "driver_specific": {} 00:23:00.440 } 00:23:00.440 ] 00:23:00.440 01:05:34 -- common/autotest_common.sh@905 -- # return 0 00:23:00.440 01:05:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:00.440 01:05:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:00.440 01:05:34 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:00.440 01:05:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:00.440 01:05:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:00.440 01:05:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:00.440 01:05:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:00.440 01:05:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:00.440 01:05:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:00.440 01:05:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:00.440 01:05:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:00.440 01:05:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:00.440 01:05:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.440 01:05:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:00.699 01:05:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.699 "name": "Existed_Raid", 00:23:00.699 "uuid": "84ccaa41-4060-4b5f-b0e9-797088f17c6b", 00:23:00.699 "strip_size_kb": 64, 00:23:00.699 "state": "online", 00:23:00.699 "raid_level": "raid5f", 00:23:00.699 "superblock": true, 00:23:00.699 "num_base_bdevs": 3, 00:23:00.699 "num_base_bdevs_discovered": 3, 00:23:00.699 "num_base_bdevs_operational": 3, 00:23:00.699 "base_bdevs_list": [ 00:23:00.699 { 00:23:00.699 "name": "BaseBdev1", 00:23:00.699 "uuid": "e1ead8ca-b31b-4cc9-9b3c-a1b235dce2c6", 00:23:00.699 "is_configured": true, 00:23:00.699 "data_offset": 2048, 00:23:00.699 "data_size": 63488 00:23:00.699 }, 00:23:00.699 { 00:23:00.699 "name": "BaseBdev2", 00:23:00.699 "uuid": "30b969a6-39ac-492e-8a14-a1abacbd1fee", 00:23:00.699 "is_configured": true, 00:23:00.699 "data_offset": 2048, 00:23:00.699 "data_size": 63488 00:23:00.699 }, 00:23:00.699 { 00:23:00.699 "name": "BaseBdev3", 00:23:00.699 "uuid": "e45e1449-6314-4851-a8d8-2ea299638fff", 00:23:00.699 "is_configured": true, 00:23:00.699 "data_offset": 2048, 00:23:00.699 "data_size": 63488 00:23:00.699 } 00:23:00.699 ] 00:23:00.699 }' 00:23:00.699 01:05:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.699 01:05:34 -- common/autotest_common.sh@10 -- # set +x 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:01.267 [2024-11-18 01:05:35.595027] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.267 01:05:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:01.528 01:05:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:01.528 "name": "Existed_Raid", 00:23:01.528 "uuid": "84ccaa41-4060-4b5f-b0e9-797088f17c6b", 00:23:01.528 "strip_size_kb": 64, 00:23:01.528 "state": "online", 00:23:01.528 "raid_level": "raid5f", 00:23:01.528 "superblock": true, 00:23:01.528 "num_base_bdevs": 3, 00:23:01.528 "num_base_bdevs_discovered": 2, 00:23:01.528 "num_base_bdevs_operational": 2, 00:23:01.528 "base_bdevs_list": [ 00:23:01.528 { 00:23:01.528 "name": null, 00:23:01.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.528 "is_configured": false, 00:23:01.528 "data_offset": 2048, 00:23:01.528 "data_size": 63488 00:23:01.528 }, 00:23:01.528 { 00:23:01.528 "name": "BaseBdev2", 00:23:01.528 "uuid": "30b969a6-39ac-492e-8a14-a1abacbd1fee", 00:23:01.528 "is_configured": true, 00:23:01.528 "data_offset": 2048, 00:23:01.528 "data_size": 63488 00:23:01.528 }, 00:23:01.528 { 00:23:01.528 "name": "BaseBdev3", 00:23:01.528 "uuid": "e45e1449-6314-4851-a8d8-2ea299638fff", 00:23:01.528 "is_configured": true, 00:23:01.528 "data_offset": 2048, 00:23:01.528 "data_size": 63488 00:23:01.528 } 00:23:01.528 ] 00:23:01.528 }' 00:23:01.528 01:05:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:01.528 01:05:35 -- common/autotest_common.sh@10 -- # set +x 00:23:02.096 01:05:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:02.096 01:05:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:02.096 01:05:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.096 01:05:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:02.355 01:05:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:02.355 01:05:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:02.355 01:05:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:02.614 [2024-11-18 01:05:36.967395] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:23:02.614 [2024-11-18 01:05:36.967707] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:02.614 [2024-11-18 01:05:36.967906] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:02.614 01:05:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:02.614 01:05:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:02.873 01:05:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.873 01:05:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:02.873 01:05:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:02.873 01:05:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:02.873 01:05:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:03.132 [2024-11-18 01:05:37.381595] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:03.132 [2024-11-18 01:05:37.382012] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:23:03.132 01:05:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:03.132 01:05:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:03.132 01:05:37 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.132 01:05:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:03.392 01:05:37 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:03.392 01:05:37 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:03.392 01:05:37 -- bdev/bdev_raid.sh@287 -- # killprocess 137827 00:23:03.392 01:05:37 -- common/autotest_common.sh@936 -- # '[' -z 137827 ']' 00:23:03.392 01:05:37 -- common/autotest_common.sh@940 -- # kill -0 137827 00:23:03.392 01:05:37 -- common/autotest_common.sh@941 -- # uname 00:23:03.392 01:05:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:03.392 01:05:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137827 00:23:03.392 01:05:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:03.392 01:05:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:03.392 01:05:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137827' 00:23:03.392 killing process with pid 137827 00:23:03.392 01:05:37 -- common/autotest_common.sh@955 -- # kill 137827 00:23:03.392 [2024-11-18 01:05:37.665693] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:03.392 [2024-11-18 01:05:37.665805] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:03.392 01:05:37 -- common/autotest_common.sh@960 -- # wait 137827 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:03.961 00:23:03.961 real 0m11.883s 00:23:03.961 user 0m20.761s 00:23:03.961 sys 0m2.249s 00:23:03.961 01:05:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:03.961 01:05:38 -- common/autotest_common.sh@10 -- # set +x 00:23:03.961 ************************************ 00:23:03.961 END TEST raid5f_state_function_test_sb 00:23:03.961 ************************************ 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:23:03.961 01:05:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:23:03.961 01:05:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:03.961 01:05:38 
-- common/autotest_common.sh@10 -- # set +x 00:23:03.961 ************************************ 00:23:03.961 START TEST raid5f_superblock_test 00:23:03.961 ************************************ 00:23:03.961 01:05:38 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 3 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@357 -- # raid_pid=138199 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@358 -- # waitforlisten 138199 /var/tmp/spdk-raid.sock 00:23:03.961 01:05:38 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:03.961 01:05:38 -- common/autotest_common.sh@829 -- # '[' -z 138199 ']' 00:23:03.961 01:05:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:03.961 01:05:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:03.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:03.961 01:05:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:03.961 01:05:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:03.961 01:05:38 -- common/autotest_common.sh@10 -- # set +x 00:23:03.961 [2024-11-18 01:05:38.224073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:23:03.961 [2024-11-18 01:05:38.224370] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138199 ] 00:23:04.220 [2024-11-18 01:05:38.378429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.220 [2024-11-18 01:05:38.458847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.220 [2024-11-18 01:05:38.537952] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:04.789 01:05:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:04.789 01:05:39 -- common/autotest_common.sh@862 -- # return 0 00:23:04.789 01:05:39 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:23:04.789 01:05:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:04.789 01:05:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:23:04.789 01:05:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:23:04.789 01:05:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:04.789 01:05:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:04.789 01:05:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:04.789 01:05:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:04.789 01:05:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:05.048 malloc1 00:23:05.048 01:05:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:05.307 [2024-11-18 01:05:39.599218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:05.307 [2024-11-18 01:05:39.599363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.307 [2024-11-18 01:05:39.599411] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:23:05.307 [2024-11-18 01:05:39.599484] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.307 [2024-11-18 01:05:39.602474] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.307 [2024-11-18 01:05:39.602545] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:05.307 pt1 00:23:05.308 01:05:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:05.308 01:05:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:05.308 01:05:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:23:05.308 01:05:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:23:05.308 01:05:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:05.308 01:05:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:05.308 01:05:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:05.308 01:05:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:05.308 01:05:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:05.567 malloc2 00:23:05.567 01:05:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:23:05.825 [2024-11-18 01:05:39.991066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:05.825 [2024-11-18 01:05:39.991179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.825 [2024-11-18 01:05:39.991220] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:23:05.825 [2024-11-18 01:05:39.991270] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.825 [2024-11-18 01:05:39.993992] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.825 [2024-11-18 01:05:39.994048] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:05.825 pt2 00:23:05.825 01:05:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:05.825 01:05:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:05.825 01:05:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:23:05.825 01:05:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:23:05.825 01:05:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:05.825 01:05:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:05.825 01:05:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:05.825 01:05:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:05.825 01:05:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:05.825 malloc3 00:23:06.084 01:05:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:06.084 [2024-11-18 01:05:40.447618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:06.084 [2024-11-18 01:05:40.447732] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.084 [2024-11-18 01:05:40.447776] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:06.084 [2024-11-18 01:05:40.447824] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.084 [2024-11-18 01:05:40.450575] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.084 [2024-11-18 01:05:40.450636] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:06.084 pt3 00:23:06.084 01:05:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:06.084 01:05:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:06.084 01:05:40 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:23:06.343 [2024-11-18 01:05:40.643829] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:06.343 [2024-11-18 01:05:40.646861] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:06.343 [2024-11-18 01:05:40.646929] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:06.343 [2024-11-18 01:05:40.647169] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:23:06.343 [2024-11-18 01:05:40.647180] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:06.343 [2024-11-18 01:05:40.647351] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:23:06.343 [2024-11-18 01:05:40.648138] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:23:06.343 [2024-11-18 01:05:40.648159] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:23:06.343 [2024-11-18 01:05:40.648360] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:06.343 01:05:40 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:06.343 01:05:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:06.343 01:05:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:06.343 01:05:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:06.343 01:05:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:06.343 01:05:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:06.343 01:05:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:06.343 01:05:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:06.343 01:05:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:06.343 01:05:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:06.343 01:05:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.343 01:05:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.603 01:05:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:06.603 "name": "raid_bdev1", 00:23:06.603 "uuid": "b6a17222-3505-4377-bb28-c409aa290383", 00:23:06.603 "strip_size_kb": 64, 00:23:06.603 "state": "online", 00:23:06.603 "raid_level": "raid5f", 00:23:06.603 "superblock": true, 00:23:06.603 "num_base_bdevs": 3, 00:23:06.603 "num_base_bdevs_discovered": 3, 00:23:06.603 "num_base_bdevs_operational": 3, 00:23:06.603 "base_bdevs_list": [ 00:23:06.603 { 00:23:06.603 "name": "pt1", 00:23:06.603 "uuid": "f9cd5006-a485-5e2d-a4e8-454034ee0c62", 00:23:06.603 "is_configured": true, 00:23:06.603 "data_offset": 2048, 00:23:06.603 "data_size": 63488 00:23:06.603 }, 00:23:06.603 { 00:23:06.603 "name": "pt2", 00:23:06.603 "uuid": "a1277905-7a61-5a66-9363-1b657e931f64", 00:23:06.603 "is_configured": true, 00:23:06.603 "data_offset": 2048, 00:23:06.603 "data_size": 63488 00:23:06.603 }, 00:23:06.603 { 00:23:06.603 "name": "pt3", 00:23:06.603 "uuid": "2f881094-1060-56e9-a077-f7a3e3122c15", 00:23:06.603 "is_configured": true, 00:23:06.603 "data_offset": 2048, 00:23:06.603 "data_size": 63488 00:23:06.603 } 00:23:06.603 ] 00:23:06.603 }' 00:23:06.603 01:05:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:06.603 01:05:40 -- common/autotest_common.sh@10 -- # set +x 00:23:07.171 01:05:41 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:23:07.171 01:05:41 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:07.430 [2024-11-18 01:05:41.688694] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:07.430 01:05:41 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b6a17222-3505-4377-bb28-c409aa290383 00:23:07.430 01:05:41 -- bdev/bdev_raid.sh@380 -- # '[' -z b6a17222-3505-4377-bb28-c409aa290383 ']' 00:23:07.430 01:05:41 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:07.689 [2024-11-18 01:05:41.968567] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:07.689 [2024-11-18 01:05:41.968609] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:07.689 [2024-11-18 01:05:41.968725] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:07.689 [2024-11-18 01:05:41.968833] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:07.689 [2024-11-18 01:05:41.968843] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:23:07.689 01:05:41 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.689 01:05:41 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:23:07.949 01:05:42 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:23:07.949 01:05:42 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:23:07.949 01:05:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:07.949 01:05:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:08.207 01:05:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:08.207 01:05:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:08.466 01:05:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:08.466 01:05:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:08.466 01:05:42 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:08.466 01:05:42 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:08.725 01:05:43 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:23:08.725 01:05:43 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:08.725 01:05:43 -- common/autotest_common.sh@650 -- # local es=0 00:23:08.725 01:05:43 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:08.725 01:05:43 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:08.725 01:05:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:08.725 01:05:43 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:08.725 01:05:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:08.725 01:05:43 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:08.725 01:05:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:08.725 01:05:43 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:08.725 01:05:43 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:08.725 01:05:43 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:08.985 [2024-11-18 01:05:43.220804] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:08.985 [2024-11-18 01:05:43.223275] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:08.985 [2024-11-18 01:05:43.223332] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:08.985 [2024-11-18 01:05:43.223382] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:23:08.985 [2024-11-18 01:05:43.223482] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:23:08.985 [2024-11-18 01:05:43.223514] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:23:08.985 [2024-11-18 01:05:43.223563] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:08.985 [2024-11-18 01:05:43.223575] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:23:08.985 request: 00:23:08.985 { 00:23:08.985 "name": "raid_bdev1", 00:23:08.985 "raid_level": "raid5f", 00:23:08.985 "base_bdevs": [ 00:23:08.985 "malloc1", 00:23:08.985 "malloc2", 00:23:08.985 "malloc3" 00:23:08.985 ], 00:23:08.985 "superblock": false, 00:23:08.985 "strip_size_kb": 64, 00:23:08.985 "method": "bdev_raid_create", 00:23:08.985 "req_id": 1 00:23:08.985 } 00:23:08.985 Got JSON-RPC error response 00:23:08.985 response: 00:23:08.985 { 00:23:08.985 "code": -17, 00:23:08.985 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:08.985 } 00:23:08.985 01:05:43 -- common/autotest_common.sh@653 -- # es=1 00:23:08.985 01:05:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:08.985 01:05:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:08.985 01:05:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:08.985 01:05:43 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:23:08.985 01:05:43 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.244 01:05:43 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:23:09.245 01:05:43 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:23:09.245 01:05:43 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:09.245 [2024-11-18 01:05:43.600795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:09.245 [2024-11-18 01:05:43.600897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.245 [2024-11-18 01:05:43.600938] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:09.245 [2024-11-18 01:05:43.600972] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.245 [2024-11-18 01:05:43.603742] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.245 [2024-11-18 01:05:43.603795] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:09.245 [2024-11-18 01:05:43.603903] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:09.245 [2024-11-18 01:05:43.603986] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:09.245 pt1 00:23:09.245 01:05:43 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:23:09.245 01:05:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:09.245 01:05:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:09.245 01:05:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:09.245 01:05:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:09.245 01:05:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:09.245 01:05:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:09.245 01:05:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:09.245 01:05:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:09.245 01:05:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:09.245 01:05:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.245 01:05:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.504 01:05:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:09.504 "name": "raid_bdev1", 00:23:09.504 "uuid": "b6a17222-3505-4377-bb28-c409aa290383", 00:23:09.504 "strip_size_kb": 64, 00:23:09.504 "state": "configuring", 00:23:09.504 "raid_level": "raid5f", 00:23:09.504 "superblock": true, 00:23:09.504 "num_base_bdevs": 3, 00:23:09.504 "num_base_bdevs_discovered": 1, 00:23:09.504 "num_base_bdevs_operational": 3, 00:23:09.504 "base_bdevs_list": [ 00:23:09.504 { 00:23:09.504 "name": "pt1", 00:23:09.504 "uuid": "f9cd5006-a485-5e2d-a4e8-454034ee0c62", 00:23:09.504 "is_configured": true, 00:23:09.504 "data_offset": 2048, 00:23:09.504 "data_size": 63488 00:23:09.504 }, 00:23:09.504 { 00:23:09.504 "name": null, 00:23:09.504 "uuid": "a1277905-7a61-5a66-9363-1b657e931f64", 00:23:09.504 "is_configured": false, 00:23:09.504 "data_offset": 2048, 00:23:09.504 "data_size": 63488 00:23:09.504 }, 00:23:09.504 { 00:23:09.504 "name": null, 00:23:09.504 "uuid": "2f881094-1060-56e9-a077-f7a3e3122c15", 00:23:09.504 "is_configured": false, 00:23:09.504 "data_offset": 2048, 00:23:09.504 "data_size": 63488 00:23:09.504 } 00:23:09.504 ] 00:23:09.504 }' 00:23:09.504 01:05:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:09.504 01:05:43 -- common/autotest_common.sh@10 -- # set +x 00:23:10.073 01:05:44 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:23:10.073 01:05:44 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:10.332 [2024-11-18 01:05:44.625034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:10.332 [2024-11-18 01:05:44.625147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.332 [2024-11-18 01:05:44.625194] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:10.332 [2024-11-18 01:05:44.625239] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.332 [2024-11-18 01:05:44.625710] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.332 [2024-11-18 01:05:44.625753] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:10.332 [2024-11-18 01:05:44.625857] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:10.332 [2024-11-18 01:05:44.625879] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:10.332 pt2 00:23:10.332 01:05:44 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:10.591 [2024-11-18 01:05:44.817138] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:10.591 01:05:44 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:10.592 01:05:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:10.592 01:05:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:10.592 01:05:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:10.592 01:05:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:10.592 01:05:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:10.592 01:05:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:10.592 01:05:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:10.592 01:05:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:10.592 01:05:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:10.592 01:05:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.592 01:05:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.851 01:05:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:10.851 "name": "raid_bdev1", 00:23:10.851 "uuid": "b6a17222-3505-4377-bb28-c409aa290383", 00:23:10.851 "strip_size_kb": 64, 00:23:10.851 "state": "configuring", 00:23:10.851 "raid_level": "raid5f", 00:23:10.851 "superblock": true, 00:23:10.851 "num_base_bdevs": 3, 00:23:10.851 "num_base_bdevs_discovered": 1, 00:23:10.851 "num_base_bdevs_operational": 3, 00:23:10.851 "base_bdevs_list": [ 00:23:10.851 { 00:23:10.851 "name": "pt1", 00:23:10.851 "uuid": "f9cd5006-a485-5e2d-a4e8-454034ee0c62", 00:23:10.851 "is_configured": true, 00:23:10.851 "data_offset": 2048, 00:23:10.851 "data_size": 63488 00:23:10.851 }, 00:23:10.851 { 00:23:10.851 "name": null, 00:23:10.851 "uuid": "a1277905-7a61-5a66-9363-1b657e931f64", 00:23:10.851 "is_configured": false, 00:23:10.851 "data_offset": 2048, 00:23:10.851 "data_size": 63488 00:23:10.851 }, 00:23:10.851 { 00:23:10.851 "name": null, 00:23:10.851 "uuid": "2f881094-1060-56e9-a077-f7a3e3122c15", 00:23:10.851 "is_configured": false, 00:23:10.851 "data_offset": 2048, 00:23:10.851 "data_size": 63488 00:23:10.851 } 00:23:10.851 ] 00:23:10.851 }' 00:23:10.851 01:05:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:10.851 01:05:45 -- common/autotest_common.sh@10 -- # set +x 00:23:11.420 01:05:45 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:23:11.420 01:05:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:11.420 01:05:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:11.420 [2024-11-18 01:05:45.809245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:11.420 [2024-11-18 01:05:45.809373] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.420 [2024-11-18 01:05:45.809411] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:11.420 [2024-11-18 01:05:45.809442] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.420 [2024-11-18 01:05:45.809924] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:11.420 [2024-11-18 01:05:45.809969] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:11.420 [2024-11-18 01:05:45.810064] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:11.420 [2024-11-18 01:05:45.810085] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:11.420 pt2 00:23:11.680 01:05:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:11.680 01:05:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:11.680 01:05:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:11.680 [2024-11-18 01:05:45.989315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:11.680 [2024-11-18 01:05:45.989415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.680 [2024-11-18 01:05:45.989451] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:11.680 [2024-11-18 01:05:45.989480] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.680 [2024-11-18 01:05:45.989932] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:11.680 [2024-11-18 01:05:45.989976] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:11.680 [2024-11-18 01:05:45.990074] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:11.680 [2024-11-18 01:05:45.990111] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:11.680 [2024-11-18 01:05:45.990252] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:11.680 [2024-11-18 01:05:45.990261] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:11.680 [2024-11-18 01:05:45.990328] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:23:11.680 [2024-11-18 01:05:45.990915] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:11.680 [2024-11-18 01:05:45.990935] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:23:11.680 [2024-11-18 01:05:45.991039] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.680 pt3 00:23:11.680 01:05:46 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:11.680 01:05:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:11.680 01:05:46 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:11.680 01:05:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:11.680 01:05:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:11.680 01:05:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:11.680 01:05:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:11.680 01:05:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:11.680 01:05:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:11.680 01:05:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:11.680 01:05:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:11.680 01:05:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:11.680 01:05:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.680 
01:05:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.940 01:05:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:11.940 "name": "raid_bdev1", 00:23:11.940 "uuid": "b6a17222-3505-4377-bb28-c409aa290383", 00:23:11.940 "strip_size_kb": 64, 00:23:11.940 "state": "online", 00:23:11.940 "raid_level": "raid5f", 00:23:11.940 "superblock": true, 00:23:11.940 "num_base_bdevs": 3, 00:23:11.940 "num_base_bdevs_discovered": 3, 00:23:11.940 "num_base_bdevs_operational": 3, 00:23:11.940 "base_bdevs_list": [ 00:23:11.940 { 00:23:11.940 "name": "pt1", 00:23:11.940 "uuid": "f9cd5006-a485-5e2d-a4e8-454034ee0c62", 00:23:11.940 "is_configured": true, 00:23:11.940 "data_offset": 2048, 00:23:11.940 "data_size": 63488 00:23:11.940 }, 00:23:11.940 { 00:23:11.940 "name": "pt2", 00:23:11.940 "uuid": "a1277905-7a61-5a66-9363-1b657e931f64", 00:23:11.940 "is_configured": true, 00:23:11.940 "data_offset": 2048, 00:23:11.940 "data_size": 63488 00:23:11.940 }, 00:23:11.940 { 00:23:11.940 "name": "pt3", 00:23:11.940 "uuid": "2f881094-1060-56e9-a077-f7a3e3122c15", 00:23:11.940 "is_configured": true, 00:23:11.940 "data_offset": 2048, 00:23:11.940 "data_size": 63488 00:23:11.940 } 00:23:11.940 ] 00:23:11.940 }' 00:23:11.940 01:05:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:11.940 01:05:46 -- common/autotest_common.sh@10 -- # set +x 00:23:12.523 01:05:46 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:12.523 01:05:46 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:12.794 [2024-11-18 01:05:47.106987] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:12.794 01:05:47 -- bdev/bdev_raid.sh@430 -- # '[' b6a17222-3505-4377-bb28-c409aa290383 '!=' b6a17222-3505-4377-bb28-c409aa290383 ']' 00:23:12.794 01:05:47 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:12.794 01:05:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:12.794 01:05:47 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:12.794 01:05:47 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:13.053 [2024-11-18 01:05:47.302846] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:13.053 01:05:47 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:13.053 01:05:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:13.053 01:05:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:13.053 01:05:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:13.053 01:05:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:13.053 01:05:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:13.053 01:05:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:13.053 01:05:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:13.053 01:05:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:13.053 01:05:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:13.054 01:05:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.054 01:05:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.313 01:05:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:13.313 "name": "raid_bdev1", 00:23:13.313 "uuid": "b6a17222-3505-4377-bb28-c409aa290383", 00:23:13.313 "strip_size_kb": 64, 
00:23:13.313 "state": "online", 00:23:13.313 "raid_level": "raid5f", 00:23:13.313 "superblock": true, 00:23:13.313 "num_base_bdevs": 3, 00:23:13.313 "num_base_bdevs_discovered": 2, 00:23:13.313 "num_base_bdevs_operational": 2, 00:23:13.313 "base_bdevs_list": [ 00:23:13.313 { 00:23:13.313 "name": null, 00:23:13.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.313 "is_configured": false, 00:23:13.313 "data_offset": 2048, 00:23:13.313 "data_size": 63488 00:23:13.313 }, 00:23:13.313 { 00:23:13.313 "name": "pt2", 00:23:13.313 "uuid": "a1277905-7a61-5a66-9363-1b657e931f64", 00:23:13.313 "is_configured": true, 00:23:13.313 "data_offset": 2048, 00:23:13.313 "data_size": 63488 00:23:13.313 }, 00:23:13.313 { 00:23:13.313 "name": "pt3", 00:23:13.313 "uuid": "2f881094-1060-56e9-a077-f7a3e3122c15", 00:23:13.313 "is_configured": true, 00:23:13.313 "data_offset": 2048, 00:23:13.313 "data_size": 63488 00:23:13.313 } 00:23:13.313 ] 00:23:13.313 }' 00:23:13.313 01:05:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:13.313 01:05:47 -- common/autotest_common.sh@10 -- # set +x 00:23:13.880 01:05:48 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:14.139 [2024-11-18 01:05:48.323021] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:14.139 [2024-11-18 01:05:48.323072] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:14.139 [2024-11-18 01:05:48.323158] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:14.139 [2024-11-18 01:05:48.323235] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:14.139 [2024-11-18 01:05:48.323245] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:23:14.140 01:05:48 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.140 01:05:48 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:14.140 01:05:48 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:14.140 01:05:48 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:14.140 01:05:48 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:14.140 01:05:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:14.140 01:05:48 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:14.399 01:05:48 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:14.399 01:05:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:14.399 01:05:48 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:14.658 01:05:48 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:14.658 01:05:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:14.658 01:05:48 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:23:14.658 01:05:48 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:14.658 01:05:48 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:14.929 [2024-11-18 01:05:49.175139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:14.929 [2024-11-18 01:05:49.175250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:23:14.929 [2024-11-18 01:05:49.175290] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:14.929 [2024-11-18 01:05:49.175314] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.929 [2024-11-18 01:05:49.178288] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.929 [2024-11-18 01:05:49.178358] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:14.929 [2024-11-18 01:05:49.178474] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:14.929 [2024-11-18 01:05:49.178511] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:14.929 pt2 00:23:14.929 01:05:49 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:14.929 01:05:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:14.929 01:05:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:14.929 01:05:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:14.929 01:05:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:14.929 01:05:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:14.929 01:05:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:14.929 01:05:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:14.929 01:05:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:14.929 01:05:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:14.929 01:05:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.929 01:05:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.192 01:05:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:15.192 "name": "raid_bdev1", 00:23:15.192 "uuid": "b6a17222-3505-4377-bb28-c409aa290383", 00:23:15.192 "strip_size_kb": 64, 00:23:15.192 "state": "configuring", 00:23:15.192 "raid_level": "raid5f", 00:23:15.192 "superblock": true, 00:23:15.192 "num_base_bdevs": 3, 00:23:15.192 "num_base_bdevs_discovered": 1, 00:23:15.192 "num_base_bdevs_operational": 2, 00:23:15.192 "base_bdevs_list": [ 00:23:15.192 { 00:23:15.192 "name": null, 00:23:15.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.192 "is_configured": false, 00:23:15.192 "data_offset": 2048, 00:23:15.192 "data_size": 63488 00:23:15.192 }, 00:23:15.192 { 00:23:15.192 "name": "pt2", 00:23:15.192 "uuid": "a1277905-7a61-5a66-9363-1b657e931f64", 00:23:15.192 "is_configured": true, 00:23:15.192 "data_offset": 2048, 00:23:15.192 "data_size": 63488 00:23:15.192 }, 00:23:15.192 { 00:23:15.192 "name": null, 00:23:15.192 "uuid": "2f881094-1060-56e9-a077-f7a3e3122c15", 00:23:15.192 "is_configured": false, 00:23:15.192 "data_offset": 2048, 00:23:15.192 "data_size": 63488 00:23:15.192 } 00:23:15.192 ] 00:23:15.192 }' 00:23:15.192 01:05:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:15.193 01:05:49 -- common/autotest_common.sh@10 -- # set +x 00:23:15.761 01:05:49 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:15.761 01:05:49 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:15.761 01:05:49 -- bdev/bdev_raid.sh@462 -- # i=2 00:23:15.761 01:05:49 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:16.020 [2024-11-18 01:05:50.179387] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:16.020 [2024-11-18 01:05:50.179510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:16.020 [2024-11-18 01:05:50.179556] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:16.020 [2024-11-18 01:05:50.179581] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:16.020 [2024-11-18 01:05:50.180082] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:16.020 [2024-11-18 01:05:50.180124] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:16.020 [2024-11-18 01:05:50.180233] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:16.020 [2024-11-18 01:05:50.180257] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:16.020 [2024-11-18 01:05:50.180365] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:23:16.020 [2024-11-18 01:05:50.180381] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:16.020 [2024-11-18 01:05:50.180454] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:23:16.020 [2024-11-18 01:05:50.181147] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:23:16.020 [2024-11-18 01:05:50.181169] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:23:16.020 [2024-11-18 01:05:50.181403] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.020 pt3 00:23:16.020 01:05:50 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:16.021 01:05:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:16.021 01:05:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:16.021 01:05:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:16.021 01:05:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:16.021 01:05:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:16.021 01:05:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:16.021 01:05:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:16.021 01:05:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:16.021 01:05:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:16.021 01:05:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.021 01:05:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.280 01:05:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:16.280 "name": "raid_bdev1", 00:23:16.280 "uuid": "b6a17222-3505-4377-bb28-c409aa290383", 00:23:16.280 "strip_size_kb": 64, 00:23:16.280 "state": "online", 00:23:16.280 "raid_level": "raid5f", 00:23:16.280 "superblock": true, 00:23:16.280 "num_base_bdevs": 3, 00:23:16.280 "num_base_bdevs_discovered": 2, 00:23:16.280 "num_base_bdevs_operational": 2, 00:23:16.280 "base_bdevs_list": [ 00:23:16.280 { 00:23:16.280 "name": null, 00:23:16.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.280 "is_configured": false, 00:23:16.280 "data_offset": 2048, 00:23:16.280 "data_size": 63488 00:23:16.280 }, 00:23:16.280 { 00:23:16.280 "name": "pt2", 00:23:16.280 "uuid": "a1277905-7a61-5a66-9363-1b657e931f64", 
00:23:16.280 "is_configured": true, 00:23:16.280 "data_offset": 2048, 00:23:16.280 "data_size": 63488 00:23:16.280 }, 00:23:16.280 { 00:23:16.280 "name": "pt3", 00:23:16.280 "uuid": "2f881094-1060-56e9-a077-f7a3e3122c15", 00:23:16.280 "is_configured": true, 00:23:16.280 "data_offset": 2048, 00:23:16.280 "data_size": 63488 00:23:16.280 } 00:23:16.280 ] 00:23:16.280 }' 00:23:16.280 01:05:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:16.280 01:05:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.849 01:05:51 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:23:16.849 01:05:51 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:16.849 [2024-11-18 01:05:51.211678] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:16.849 [2024-11-18 01:05:51.211729] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:16.849 [2024-11-18 01:05:51.211828] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:16.849 [2024-11-18 01:05:51.211901] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:16.849 [2024-11-18 01:05:51.211912] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:23:16.849 01:05:51 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.849 01:05:51 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:23:17.109 01:05:51 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:23:17.109 01:05:51 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:23:17.109 01:05:51 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:17.368 [2024-11-18 01:05:51.655357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:17.368 [2024-11-18 01:05:51.655482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.368 [2024-11-18 01:05:51.655530] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:17.368 [2024-11-18 01:05:51.655555] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.368 [2024-11-18 01:05:51.658810] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.368 [2024-11-18 01:05:51.658878] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:17.368 [2024-11-18 01:05:51.659009] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:17.368 [2024-11-18 01:05:51.659056] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:17.368 pt1 00:23:17.368 01:05:51 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:17.368 01:05:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:17.368 01:05:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:17.368 01:05:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:17.368 01:05:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:17.368 01:05:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:17.368 01:05:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:17.368 01:05:51 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:17.368 01:05:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:17.368 01:05:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:17.368 01:05:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.368 01:05:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.627 01:05:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:17.628 "name": "raid_bdev1", 00:23:17.628 "uuid": "b6a17222-3505-4377-bb28-c409aa290383", 00:23:17.628 "strip_size_kb": 64, 00:23:17.628 "state": "configuring", 00:23:17.628 "raid_level": "raid5f", 00:23:17.628 "superblock": true, 00:23:17.628 "num_base_bdevs": 3, 00:23:17.628 "num_base_bdevs_discovered": 1, 00:23:17.628 "num_base_bdevs_operational": 3, 00:23:17.628 "base_bdevs_list": [ 00:23:17.628 { 00:23:17.628 "name": "pt1", 00:23:17.628 "uuid": "f9cd5006-a485-5e2d-a4e8-454034ee0c62", 00:23:17.628 "is_configured": true, 00:23:17.628 "data_offset": 2048, 00:23:17.628 "data_size": 63488 00:23:17.628 }, 00:23:17.628 { 00:23:17.628 "name": null, 00:23:17.628 "uuid": "a1277905-7a61-5a66-9363-1b657e931f64", 00:23:17.628 "is_configured": false, 00:23:17.628 "data_offset": 2048, 00:23:17.628 "data_size": 63488 00:23:17.628 }, 00:23:17.628 { 00:23:17.628 "name": null, 00:23:17.628 "uuid": "2f881094-1060-56e9-a077-f7a3e3122c15", 00:23:17.628 "is_configured": false, 00:23:17.628 "data_offset": 2048, 00:23:17.628 "data_size": 63488 00:23:17.628 } 00:23:17.628 ] 00:23:17.628 }' 00:23:17.628 01:05:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:17.628 01:05:51 -- common/autotest_common.sh@10 -- # set +x 00:23:18.195 01:05:52 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:23:18.195 01:05:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:18.195 01:05:52 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:18.454 01:05:52 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:18.454 01:05:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:18.454 01:05:52 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:18.454 01:05:52 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:18.454 01:05:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:18.454 01:05:52 -- bdev/bdev_raid.sh@489 -- # i=2 00:23:18.454 01:05:52 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:18.714 [2024-11-18 01:05:52.956995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:18.714 [2024-11-18 01:05:52.957117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.714 [2024-11-18 01:05:52.957157] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:18.714 [2024-11-18 01:05:52.957188] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.714 [2024-11-18 01:05:52.957728] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.714 [2024-11-18 01:05:52.957782] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:18.714 [2024-11-18 01:05:52.957903] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:23:18.714 [2024-11-18 01:05:52.957917] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:18.714 [2024-11-18 01:05:52.957925] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:18.714 [2024-11-18 01:05:52.957957] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:23:18.714 [2024-11-18 01:05:52.958029] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:18.714 pt3 00:23:18.714 01:05:52 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:18.714 01:05:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:18.714 01:05:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:18.714 01:05:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:18.714 01:05:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:18.714 01:05:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:18.714 01:05:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:18.714 01:05:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:18.714 01:05:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:18.714 01:05:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:18.714 01:05:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.714 01:05:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.974 01:05:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:18.974 "name": "raid_bdev1", 00:23:18.974 "uuid": "b6a17222-3505-4377-bb28-c409aa290383", 00:23:18.974 "strip_size_kb": 64, 00:23:18.974 "state": "configuring", 00:23:18.974 "raid_level": "raid5f", 00:23:18.974 "superblock": true, 00:23:18.974 "num_base_bdevs": 3, 00:23:18.974 "num_base_bdevs_discovered": 1, 00:23:18.974 "num_base_bdevs_operational": 2, 00:23:18.974 "base_bdevs_list": [ 00:23:18.974 { 00:23:18.974 "name": null, 00:23:18.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.974 "is_configured": false, 00:23:18.974 "data_offset": 2048, 00:23:18.974 "data_size": 63488 00:23:18.974 }, 00:23:18.974 { 00:23:18.974 "name": null, 00:23:18.974 "uuid": "a1277905-7a61-5a66-9363-1b657e931f64", 00:23:18.974 "is_configured": false, 00:23:18.974 "data_offset": 2048, 00:23:18.974 "data_size": 63488 00:23:18.974 }, 00:23:18.974 { 00:23:18.974 "name": "pt3", 00:23:18.974 "uuid": "2f881094-1060-56e9-a077-f7a3e3122c15", 00:23:18.974 "is_configured": true, 00:23:18.974 "data_offset": 2048, 00:23:18.974 "data_size": 63488 00:23:18.974 } 00:23:18.974 ] 00:23:18.974 }' 00:23:18.974 01:05:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:18.974 01:05:53 -- common/autotest_common.sh@10 -- # set +x 00:23:19.542 01:05:53 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:23:19.542 01:05:53 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:19.542 01:05:53 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:19.801 [2024-11-18 01:05:53.989251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:19.801 [2024-11-18 01:05:53.989381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.801 [2024-11-18 
01:05:53.989421] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:19.801 [2024-11-18 01:05:53.989453] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.801 [2024-11-18 01:05:53.989981] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.801 [2024-11-18 01:05:53.990030] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:19.801 [2024-11-18 01:05:53.990147] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:19.801 [2024-11-18 01:05:53.990171] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:19.801 [2024-11-18 01:05:53.990296] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:23:19.801 [2024-11-18 01:05:53.990305] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:19.801 [2024-11-18 01:05:53.990382] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:23:19.801 [2024-11-18 01:05:53.991119] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:23:19.801 [2024-11-18 01:05:53.991143] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:23:19.801 [2024-11-18 01:05:53.991304] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.801 pt2 00:23:19.801 01:05:54 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:19.801 01:05:54 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:19.801 01:05:54 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:19.801 01:05:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:19.801 01:05:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:19.801 01:05:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:19.801 01:05:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:19.801 01:05:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:19.801 01:05:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:19.801 01:05:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:19.801 01:05:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:19.801 01:05:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:19.801 01:05:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.801 01:05:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.060 01:05:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:20.060 "name": "raid_bdev1", 00:23:20.060 "uuid": "b6a17222-3505-4377-bb28-c409aa290383", 00:23:20.060 "strip_size_kb": 64, 00:23:20.060 "state": "online", 00:23:20.060 "raid_level": "raid5f", 00:23:20.060 "superblock": true, 00:23:20.060 "num_base_bdevs": 3, 00:23:20.060 "num_base_bdevs_discovered": 2, 00:23:20.060 "num_base_bdevs_operational": 2, 00:23:20.060 "base_bdevs_list": [ 00:23:20.060 { 00:23:20.060 "name": null, 00:23:20.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.060 "is_configured": false, 00:23:20.060 "data_offset": 2048, 00:23:20.060 "data_size": 63488 00:23:20.060 }, 00:23:20.060 { 00:23:20.060 "name": "pt2", 00:23:20.060 "uuid": "a1277905-7a61-5a66-9363-1b657e931f64", 00:23:20.060 "is_configured": true, 00:23:20.060 "data_offset": 2048, 
00:23:20.060 "data_size": 63488 00:23:20.060 }, 00:23:20.060 { 00:23:20.060 "name": "pt3", 00:23:20.060 "uuid": "2f881094-1060-56e9-a077-f7a3e3122c15", 00:23:20.060 "is_configured": true, 00:23:20.060 "data_offset": 2048, 00:23:20.060 "data_size": 63488 00:23:20.060 } 00:23:20.060 ] 00:23:20.060 }' 00:23:20.060 01:05:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:20.060 01:05:54 -- common/autotest_common.sh@10 -- # set +x 00:23:20.628 01:05:54 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:20.628 01:05:54 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:23:20.628 [2024-11-18 01:05:54.965790] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:20.628 01:05:54 -- bdev/bdev_raid.sh@506 -- # '[' b6a17222-3505-4377-bb28-c409aa290383 '!=' b6a17222-3505-4377-bb28-c409aa290383 ']' 00:23:20.628 01:05:54 -- bdev/bdev_raid.sh@511 -- # killprocess 138199 00:23:20.628 01:05:54 -- common/autotest_common.sh@936 -- # '[' -z 138199 ']' 00:23:20.628 01:05:54 -- common/autotest_common.sh@940 -- # kill -0 138199 00:23:20.628 01:05:54 -- common/autotest_common.sh@941 -- # uname 00:23:20.628 01:05:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:20.628 01:05:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138199 00:23:20.628 01:05:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:20.628 01:05:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:20.628 01:05:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138199' 00:23:20.628 killing process with pid 138199 00:23:20.628 01:05:55 -- common/autotest_common.sh@955 -- # kill 138199 00:23:20.628 [2024-11-18 01:05:55.018130] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:20.628 01:05:55 -- common/autotest_common.sh@960 -- # wait 138199 00:23:20.628 [2024-11-18 01:05:55.018252] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:20.628 [2024-11-18 01:05:55.018323] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:20.628 [2024-11-18 01:05:55.018334] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:23:20.887 [2024-11-18 01:05:55.082089] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:21.146 01:05:55 -- bdev/bdev_raid.sh@513 -- # return 0 00:23:21.146 00:23:21.146 real 0m17.330s 00:23:21.146 user 0m31.281s 00:23:21.146 sys 0m3.087s 00:23:21.146 01:05:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:21.146 01:05:55 -- common/autotest_common.sh@10 -- # set +x 00:23:21.146 ************************************ 00:23:21.146 END TEST raid5f_superblock_test 00:23:21.146 ************************************ 00:23:21.146 01:05:55 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:23:21.146 01:05:55 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:23:21.146 01:05:55 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:21.146 01:05:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:21.146 01:05:55 -- common/autotest_common.sh@10 -- # set +x 00:23:21.406 ************************************ 00:23:21.406 START TEST raid5f_rebuild_test 00:23:21.406 ************************************ 00:23:21.406 01:05:55 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 
false false 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@544 -- # raid_pid=138781 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@545 -- # waitforlisten 138781 /var/tmp/spdk-raid.sock 00:23:21.406 01:05:55 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:21.406 01:05:55 -- common/autotest_common.sh@829 -- # '[' -z 138781 ']' 00:23:21.406 01:05:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:21.406 01:05:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:21.406 01:05:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:21.406 01:05:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.406 01:05:55 -- common/autotest_common.sh@10 -- # set +x 00:23:21.406 [2024-11-18 01:05:55.633099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:21.406 [2024-11-18 01:05:55.633371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138781 ] 00:23:21.406 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:21.406 Zero copy mechanism will not be used. 
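A minimal by-hand sketch of the step the trace just performed: launching bdevperf as the RPC target on its own socket and waiting for it to answer. Paths and flags are taken from the trace above; the polling loop is only a stand-in for the harness's waitforlisten helper, not that helper's code:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # poll until the target answers RPCs on the socket (stand-in for waitforlisten)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done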
00:23:21.406 [2024-11-18 01:05:55.792956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.665 [2024-11-18 01:05:55.888269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.665 [2024-11-18 01:05:55.974656] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:22.234 01:05:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:22.234 01:05:56 -- common/autotest_common.sh@862 -- # return 0 00:23:22.234 01:05:56 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:22.234 01:05:56 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:22.234 01:05:56 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:22.501 BaseBdev1 00:23:22.501 01:05:56 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:22.501 01:05:56 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:22.501 01:05:56 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:22.768 BaseBdev2 00:23:22.768 01:05:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:22.768 01:05:57 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:22.768 01:05:57 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:23.043 BaseBdev3 00:23:23.043 01:05:57 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:23.302 spare_malloc 00:23:23.302 01:05:57 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:23.302 spare_delay 00:23:23.561 01:05:57 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:23.561 [2024-11-18 01:05:57.875679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:23.561 [2024-11-18 01:05:57.875833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.561 [2024-11-18 01:05:57.875883] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:23.561 [2024-11-18 01:05:57.875949] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.561 [2024-11-18 01:05:57.878970] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.561 [2024-11-18 01:05:57.879043] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:23.561 spare 00:23:23.561 01:05:57 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:23.821 [2024-11-18 01:05:58.067983] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:23.821 [2024-11-18 01:05:58.070559] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:23.821 [2024-11-18 01:05:58.070624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:23.821 [2024-11-18 01:05:58.070726] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:23:23.821 
[2024-11-18 01:05:58.070737] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:23.821 [2024-11-18 01:05:58.070970] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:23:23.821 [2024-11-18 01:05:58.071883] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:23:23.821 [2024-11-18 01:05:58.071907] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:23:23.821 [2024-11-18 01:05:58.072185] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.821 01:05:58 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:23.821 01:05:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:23.821 01:05:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.821 01:05:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:23.821 01:05:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:23.821 01:05:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:23.821 01:05:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.821 01:05:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.821 01:05:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.821 01:05:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.821 01:05:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.821 01:05:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.080 01:05:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:24.080 "name": "raid_bdev1", 00:23:24.080 "uuid": "25fd823f-b4c7-44c5-8c04-07e4c0d5fcfc", 00:23:24.080 "strip_size_kb": 64, 00:23:24.080 "state": "online", 00:23:24.080 "raid_level": "raid5f", 00:23:24.080 "superblock": false, 00:23:24.080 "num_base_bdevs": 3, 00:23:24.080 "num_base_bdevs_discovered": 3, 00:23:24.080 "num_base_bdevs_operational": 3, 00:23:24.080 "base_bdevs_list": [ 00:23:24.080 { 00:23:24.080 "name": "BaseBdev1", 00:23:24.080 "uuid": "27528282-63c5-4590-88d1-6921c54955f2", 00:23:24.080 "is_configured": true, 00:23:24.080 "data_offset": 0, 00:23:24.080 "data_size": 65536 00:23:24.080 }, 00:23:24.080 { 00:23:24.080 "name": "BaseBdev2", 00:23:24.080 "uuid": "9d5de14a-5182-459f-b3e6-60273711e3b9", 00:23:24.080 "is_configured": true, 00:23:24.080 "data_offset": 0, 00:23:24.080 "data_size": 65536 00:23:24.080 }, 00:23:24.080 { 00:23:24.080 "name": "BaseBdev3", 00:23:24.080 "uuid": "cece23b8-d89f-4851-b880-0209ac5b25c4", 00:23:24.080 "is_configured": true, 00:23:24.080 "data_offset": 0, 00:23:24.080 "data_size": 65536 00:23:24.080 } 00:23:24.080 ] 00:23:24.080 }' 00:23:24.080 01:05:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:24.080 01:05:58 -- common/autotest_common.sh@10 -- # set +x 00:23:24.649 01:05:58 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:24.649 01:05:58 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:24.908 [2024-11-18 01:05:59.120566] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:24.908 01:05:59 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:23:24.908 01:05:59 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
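Condensed from the xtrace above, the device stack the rebuild test assembles before any I/O amounts to the following RPC sequence (a sketch reusing the socket, commands, and sizes shown in the trace, not the harness's own code):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $RPC bdev_malloc_create 32 512 -b "$b"            # 32 MiB base bdev, 512 B blocks
  done
  $RPC bdev_malloc_create 32 512 -b spare_malloc
  $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $RPC bdev_passthru_create -b spare_delay -p spare     # delayed passthru used later as the rebuild target
  $RPC bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1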
00:23:24.908 01:05:59 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:25.166 01:05:59 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:25.166 01:05:59 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:25.166 01:05:59 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:25.166 01:05:59 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:25.166 01:05:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:25.166 01:05:59 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:25.166 01:05:59 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:25.166 01:05:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:25.166 01:05:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:25.166 01:05:59 -- bdev/nbd_common.sh@12 -- # local i 00:23:25.166 01:05:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:25.166 01:05:59 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:25.166 01:05:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:25.425 [2024-11-18 01:05:59.580527] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:23:25.425 /dev/nbd0 00:23:25.425 01:05:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:25.425 01:05:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:25.425 01:05:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:25.425 01:05:59 -- common/autotest_common.sh@867 -- # local i 00:23:25.425 01:05:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:25.425 01:05:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:25.425 01:05:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:25.425 01:05:59 -- common/autotest_common.sh@871 -- # break 00:23:25.425 01:05:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:25.425 01:05:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:25.425 01:05:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:25.425 1+0 records in 00:23:25.425 1+0 records out 00:23:25.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300497 s, 13.6 MB/s 00:23:25.425 01:05:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:25.425 01:05:59 -- common/autotest_common.sh@884 -- # size=4096 00:23:25.425 01:05:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:25.425 01:05:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:25.425 01:05:59 -- common/autotest_common.sh@887 -- # return 0 00:23:25.425 01:05:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:25.425 01:05:59 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:25.425 01:05:59 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:25.425 01:05:59 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:23:25.425 01:05:59 -- bdev/bdev_raid.sh@582 -- # echo 128 00:23:25.425 01:05:59 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:23:25.685 512+0 records in 00:23:25.685 512+0 records out 00:23:25.685 67108864 bytes (67 MB, 64 MiB) copied, 0.305001 s, 220 MB/s 00:23:25.685 01:05:59 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:25.685 01:05:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 
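The 64 MiB urandom fill above is sized to cover the whole array in full-stripe writes; every number follows from values already printed in the trace:

  strip_size_kb=64; data_disks=$((3 - 1))              # raid5f over 3 base bdevs
  full_stripe=$((strip_size_kb * 1024 * data_disks))   # 131072 bytes, the dd bs
  blockcnt=131072; blocklen=512                        # raid_bdev1 size reported above
  write_unit=$((full_stripe / blocklen))               # 256 blocks, matching write_unit_size
  count=$((blockcnt * blocklen / full_stripe))         # 512, the dd count (64 MiB total)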
00:23:25.685 01:05:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:25.685 01:05:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:25.685 01:05:59 -- bdev/nbd_common.sh@51 -- # local i 00:23:25.685 01:05:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:25.685 01:05:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:25.944 [2024-11-18 01:06:00.191985] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:25.944 01:06:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:25.944 01:06:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:25.944 01:06:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:25.944 01:06:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:25.944 01:06:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:25.944 01:06:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:25.944 01:06:00 -- bdev/nbd_common.sh@41 -- # break 00:23:25.944 01:06:00 -- bdev/nbd_common.sh@45 -- # return 0 00:23:25.944 01:06:00 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:26.203 [2024-11-18 01:06:00.375512] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:26.203 01:06:00 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:26.203 01:06:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:26.203 01:06:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:26.203 01:06:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:26.203 01:06:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:26.203 01:06:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:26.203 01:06:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:26.203 01:06:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:26.203 01:06:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:26.203 01:06:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:26.203 01:06:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.203 01:06:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.462 01:06:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:26.462 "name": "raid_bdev1", 00:23:26.462 "uuid": "25fd823f-b4c7-44c5-8c04-07e4c0d5fcfc", 00:23:26.462 "strip_size_kb": 64, 00:23:26.462 "state": "online", 00:23:26.462 "raid_level": "raid5f", 00:23:26.462 "superblock": false, 00:23:26.462 "num_base_bdevs": 3, 00:23:26.462 "num_base_bdevs_discovered": 2, 00:23:26.462 "num_base_bdevs_operational": 2, 00:23:26.462 "base_bdevs_list": [ 00:23:26.462 { 00:23:26.462 "name": null, 00:23:26.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.462 "is_configured": false, 00:23:26.462 "data_offset": 0, 00:23:26.462 "data_size": 65536 00:23:26.462 }, 00:23:26.462 { 00:23:26.462 "name": "BaseBdev2", 00:23:26.462 "uuid": "9d5de14a-5182-459f-b3e6-60273711e3b9", 00:23:26.462 "is_configured": true, 00:23:26.462 "data_offset": 0, 00:23:26.462 "data_size": 65536 00:23:26.462 }, 00:23:26.462 { 00:23:26.462 "name": "BaseBdev3", 00:23:26.462 "uuid": "cece23b8-d89f-4851-b880-0209ac5b25c4", 00:23:26.462 "is_configured": true, 00:23:26.462 "data_offset": 0, 00:23:26.462 "data_size": 65536 00:23:26.462 } 00:23:26.462 ] 00:23:26.462 }' 
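The JSON above is what verify_raid_bdev_state inspects once BaseBdev1 has been removed: the array must stay online at raid5f with strip size 64 while only 2 of the 3 base bdevs remain. A by-hand spot check of the same fields, reusing the RPC call and jq filter from the trace, could look like:

  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
         bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state' <<< "$info") == online ]]
  [[ $(jq -r '.raid_level' <<< "$info") == raid5f ]]
  [[ $(jq -r '.strip_size_kb' <<< "$info") == 64 ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 2 ]]   # degraded: one member missing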
00:23:26.462 01:06:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:26.462 01:06:00 -- common/autotest_common.sh@10 -- # set +x 00:23:27.031 01:06:01 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:27.031 [2024-11-18 01:06:01.387669] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:27.031 [2024-11-18 01:06:01.387758] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:27.031 [2024-11-18 01:06:01.394908] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027990 00:23:27.031 [2024-11-18 01:06:01.398193] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:27.031 01:06:01 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:28.411 01:06:02 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:28.411 01:06:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:28.411 01:06:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:28.411 01:06:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:28.411 01:06:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:28.411 01:06:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.411 01:06:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.411 01:06:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:28.411 "name": "raid_bdev1", 00:23:28.411 "uuid": "25fd823f-b4c7-44c5-8c04-07e4c0d5fcfc", 00:23:28.411 "strip_size_kb": 64, 00:23:28.411 "state": "online", 00:23:28.411 "raid_level": "raid5f", 00:23:28.411 "superblock": false, 00:23:28.411 "num_base_bdevs": 3, 00:23:28.411 "num_base_bdevs_discovered": 3, 00:23:28.411 "num_base_bdevs_operational": 3, 00:23:28.411 "process": { 00:23:28.411 "type": "rebuild", 00:23:28.411 "target": "spare", 00:23:28.411 "progress": { 00:23:28.411 "blocks": 24576, 00:23:28.411 "percent": 18 00:23:28.411 } 00:23:28.411 }, 00:23:28.411 "base_bdevs_list": [ 00:23:28.411 { 00:23:28.411 "name": "spare", 00:23:28.411 "uuid": "d38c6fb5-af57-5189-919f-dc5260d495e1", 00:23:28.411 "is_configured": true, 00:23:28.411 "data_offset": 0, 00:23:28.411 "data_size": 65536 00:23:28.411 }, 00:23:28.411 { 00:23:28.411 "name": "BaseBdev2", 00:23:28.411 "uuid": "9d5de14a-5182-459f-b3e6-60273711e3b9", 00:23:28.411 "is_configured": true, 00:23:28.411 "data_offset": 0, 00:23:28.411 "data_size": 65536 00:23:28.411 }, 00:23:28.411 { 00:23:28.411 "name": "BaseBdev3", 00:23:28.411 "uuid": "cece23b8-d89f-4851-b880-0209ac5b25c4", 00:23:28.411 "is_configured": true, 00:23:28.411 "data_offset": 0, 00:23:28.411 "data_size": 65536 00:23:28.411 } 00:23:28.411 ] 00:23:28.411 }' 00:23:28.411 01:06:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:28.411 01:06:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:28.411 01:06:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:28.411 01:06:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:28.411 01:06:02 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:28.671 [2024-11-18 01:06:02.984295] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:28.671 [2024-11-18 01:06:03.014526] 
bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:28.671 [2024-11-18 01:06:03.014664] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.671 01:06:03 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:28.671 01:06:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:28.671 01:06:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:28.671 01:06:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:28.671 01:06:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:28.671 01:06:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:28.671 01:06:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:28.671 01:06:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:28.671 01:06:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:28.671 01:06:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:28.671 01:06:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.671 01:06:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.241 01:06:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:29.241 "name": "raid_bdev1", 00:23:29.241 "uuid": "25fd823f-b4c7-44c5-8c04-07e4c0d5fcfc", 00:23:29.241 "strip_size_kb": 64, 00:23:29.241 "state": "online", 00:23:29.241 "raid_level": "raid5f", 00:23:29.241 "superblock": false, 00:23:29.241 "num_base_bdevs": 3, 00:23:29.241 "num_base_bdevs_discovered": 2, 00:23:29.241 "num_base_bdevs_operational": 2, 00:23:29.241 "base_bdevs_list": [ 00:23:29.241 { 00:23:29.241 "name": null, 00:23:29.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.241 "is_configured": false, 00:23:29.241 "data_offset": 0, 00:23:29.241 "data_size": 65536 00:23:29.241 }, 00:23:29.241 { 00:23:29.241 "name": "BaseBdev2", 00:23:29.241 "uuid": "9d5de14a-5182-459f-b3e6-60273711e3b9", 00:23:29.241 "is_configured": true, 00:23:29.241 "data_offset": 0, 00:23:29.241 "data_size": 65536 00:23:29.241 }, 00:23:29.241 { 00:23:29.241 "name": "BaseBdev3", 00:23:29.241 "uuid": "cece23b8-d89f-4851-b880-0209ac5b25c4", 00:23:29.241 "is_configured": true, 00:23:29.241 "data_offset": 0, 00:23:29.241 "data_size": 65536 00:23:29.241 } 00:23:29.241 ] 00:23:29.241 }' 00:23:29.241 01:06:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:29.241 01:06:03 -- common/autotest_common.sh@10 -- # set +x 00:23:29.809 01:06:04 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:29.809 01:06:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:29.809 01:06:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:29.809 01:06:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:29.809 01:06:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:29.809 01:06:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.809 01:06:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.068 01:06:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:30.068 "name": "raid_bdev1", 00:23:30.068 "uuid": "25fd823f-b4c7-44c5-8c04-07e4c0d5fcfc", 00:23:30.068 "strip_size_kb": 64, 00:23:30.068 "state": "online", 00:23:30.068 "raid_level": "raid5f", 00:23:30.068 "superblock": false, 00:23:30.068 "num_base_bdevs": 3, 00:23:30.068 
"num_base_bdevs_discovered": 2, 00:23:30.068 "num_base_bdevs_operational": 2, 00:23:30.068 "base_bdevs_list": [ 00:23:30.068 { 00:23:30.068 "name": null, 00:23:30.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.068 "is_configured": false, 00:23:30.068 "data_offset": 0, 00:23:30.068 "data_size": 65536 00:23:30.068 }, 00:23:30.068 { 00:23:30.068 "name": "BaseBdev2", 00:23:30.068 "uuid": "9d5de14a-5182-459f-b3e6-60273711e3b9", 00:23:30.068 "is_configured": true, 00:23:30.068 "data_offset": 0, 00:23:30.068 "data_size": 65536 00:23:30.068 }, 00:23:30.068 { 00:23:30.068 "name": "BaseBdev3", 00:23:30.068 "uuid": "cece23b8-d89f-4851-b880-0209ac5b25c4", 00:23:30.068 "is_configured": true, 00:23:30.068 "data_offset": 0, 00:23:30.068 "data_size": 65536 00:23:30.068 } 00:23:30.068 ] 00:23:30.068 }' 00:23:30.068 01:06:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:30.068 01:06:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:30.068 01:06:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:30.068 01:06:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:30.068 01:06:04 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:30.328 [2024-11-18 01:06:04.541569] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:30.328 [2024-11-18 01:06:04.541639] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:30.328 [2024-11-18 01:06:04.548674] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027b30 00:23:30.328 [2024-11-18 01:06:04.551415] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:30.328 01:06:04 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:31.266 01:06:05 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:31.266 01:06:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:31.266 01:06:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:31.266 01:06:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:31.266 01:06:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:31.266 01:06:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.266 01:06:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:31.525 "name": "raid_bdev1", 00:23:31.525 "uuid": "25fd823f-b4c7-44c5-8c04-07e4c0d5fcfc", 00:23:31.525 "strip_size_kb": 64, 00:23:31.525 "state": "online", 00:23:31.525 "raid_level": "raid5f", 00:23:31.525 "superblock": false, 00:23:31.525 "num_base_bdevs": 3, 00:23:31.525 "num_base_bdevs_discovered": 3, 00:23:31.525 "num_base_bdevs_operational": 3, 00:23:31.525 "process": { 00:23:31.525 "type": "rebuild", 00:23:31.525 "target": "spare", 00:23:31.525 "progress": { 00:23:31.525 "blocks": 24576, 00:23:31.525 "percent": 18 00:23:31.525 } 00:23:31.525 }, 00:23:31.525 "base_bdevs_list": [ 00:23:31.525 { 00:23:31.525 "name": "spare", 00:23:31.525 "uuid": "d38c6fb5-af57-5189-919f-dc5260d495e1", 00:23:31.525 "is_configured": true, 00:23:31.525 "data_offset": 0, 00:23:31.525 "data_size": 65536 00:23:31.525 }, 00:23:31.525 { 00:23:31.525 "name": "BaseBdev2", 00:23:31.525 "uuid": "9d5de14a-5182-459f-b3e6-60273711e3b9", 00:23:31.525 "is_configured": true, 
00:23:31.525 "data_offset": 0, 00:23:31.525 "data_size": 65536 00:23:31.525 }, 00:23:31.525 { 00:23:31.525 "name": "BaseBdev3", 00:23:31.525 "uuid": "cece23b8-d89f-4851-b880-0209ac5b25c4", 00:23:31.525 "is_configured": true, 00:23:31.525 "data_offset": 0, 00:23:31.525 "data_size": 65536 00:23:31.525 } 00:23:31.525 ] 00:23:31.525 }' 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@657 -- # local timeout=568 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:31.525 01:06:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:31.526 01:06:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.526 01:06:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.785 01:06:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:31.785 "name": "raid_bdev1", 00:23:31.785 "uuid": "25fd823f-b4c7-44c5-8c04-07e4c0d5fcfc", 00:23:31.785 "strip_size_kb": 64, 00:23:31.785 "state": "online", 00:23:31.785 "raid_level": "raid5f", 00:23:31.785 "superblock": false, 00:23:31.785 "num_base_bdevs": 3, 00:23:31.785 "num_base_bdevs_discovered": 3, 00:23:31.785 "num_base_bdevs_operational": 3, 00:23:31.785 "process": { 00:23:31.785 "type": "rebuild", 00:23:31.785 "target": "spare", 00:23:31.785 "progress": { 00:23:31.785 "blocks": 30720, 00:23:31.785 "percent": 23 00:23:31.785 } 00:23:31.785 }, 00:23:31.785 "base_bdevs_list": [ 00:23:31.785 { 00:23:31.785 "name": "spare", 00:23:31.785 "uuid": "d38c6fb5-af57-5189-919f-dc5260d495e1", 00:23:31.785 "is_configured": true, 00:23:31.785 "data_offset": 0, 00:23:31.785 "data_size": 65536 00:23:31.785 }, 00:23:31.785 { 00:23:31.785 "name": "BaseBdev2", 00:23:31.785 "uuid": "9d5de14a-5182-459f-b3e6-60273711e3b9", 00:23:31.785 "is_configured": true, 00:23:31.785 "data_offset": 0, 00:23:31.785 "data_size": 65536 00:23:31.785 }, 00:23:31.785 { 00:23:31.785 "name": "BaseBdev3", 00:23:31.785 "uuid": "cece23b8-d89f-4851-b880-0209ac5b25c4", 00:23:31.785 "is_configured": true, 00:23:31.785 "data_offset": 0, 00:23:31.785 "data_size": 65536 00:23:31.785 } 00:23:31.785 ] 00:23:31.785 }' 00:23:31.785 01:06:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:32.044 01:06:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:32.044 01:06:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:32.044 01:06:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:32.044 01:06:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:32.982 01:06:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:32.982 
01:06:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.982 01:06:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:32.982 01:06:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:32.982 01:06:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:32.982 01:06:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:32.982 01:06:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.982 01:06:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.241 01:06:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:33.241 "name": "raid_bdev1", 00:23:33.241 "uuid": "25fd823f-b4c7-44c5-8c04-07e4c0d5fcfc", 00:23:33.241 "strip_size_kb": 64, 00:23:33.241 "state": "online", 00:23:33.241 "raid_level": "raid5f", 00:23:33.241 "superblock": false, 00:23:33.241 "num_base_bdevs": 3, 00:23:33.241 "num_base_bdevs_discovered": 3, 00:23:33.241 "num_base_bdevs_operational": 3, 00:23:33.241 "process": { 00:23:33.241 "type": "rebuild", 00:23:33.241 "target": "spare", 00:23:33.241 "progress": { 00:23:33.241 "blocks": 59392, 00:23:33.241 "percent": 45 00:23:33.241 } 00:23:33.241 }, 00:23:33.241 "base_bdevs_list": [ 00:23:33.241 { 00:23:33.241 "name": "spare", 00:23:33.241 "uuid": "d38c6fb5-af57-5189-919f-dc5260d495e1", 00:23:33.241 "is_configured": true, 00:23:33.241 "data_offset": 0, 00:23:33.241 "data_size": 65536 00:23:33.241 }, 00:23:33.241 { 00:23:33.241 "name": "BaseBdev2", 00:23:33.241 "uuid": "9d5de14a-5182-459f-b3e6-60273711e3b9", 00:23:33.241 "is_configured": true, 00:23:33.241 "data_offset": 0, 00:23:33.241 "data_size": 65536 00:23:33.241 }, 00:23:33.241 { 00:23:33.241 "name": "BaseBdev3", 00:23:33.241 "uuid": "cece23b8-d89f-4851-b880-0209ac5b25c4", 00:23:33.241 "is_configured": true, 00:23:33.241 "data_offset": 0, 00:23:33.241 "data_size": 65536 00:23:33.241 } 00:23:33.241 ] 00:23:33.241 }' 00:23:33.241 01:06:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:33.241 01:06:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:33.241 01:06:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:33.241 01:06:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.241 01:06:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:34.620 01:06:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:34.620 01:06:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:34.620 01:06:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:34.620 01:06:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:34.620 01:06:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:34.620 01:06:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:34.620 01:06:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.620 01:06:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.620 01:06:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:34.620 "name": "raid_bdev1", 00:23:34.620 "uuid": "25fd823f-b4c7-44c5-8c04-07e4c0d5fcfc", 00:23:34.620 "strip_size_kb": 64, 00:23:34.620 "state": "online", 00:23:34.620 "raid_level": "raid5f", 00:23:34.620 "superblock": false, 00:23:34.620 "num_base_bdevs": 3, 00:23:34.620 "num_base_bdevs_discovered": 3, 00:23:34.620 "num_base_bdevs_operational": 3, 
00:23:34.620 "process": { 00:23:34.620 "type": "rebuild", 00:23:34.620 "target": "spare", 00:23:34.620 "progress": { 00:23:34.620 "blocks": 86016, 00:23:34.620 "percent": 65 00:23:34.620 } 00:23:34.620 }, 00:23:34.620 "base_bdevs_list": [ 00:23:34.620 { 00:23:34.620 "name": "spare", 00:23:34.620 "uuid": "d38c6fb5-af57-5189-919f-dc5260d495e1", 00:23:34.620 "is_configured": true, 00:23:34.620 "data_offset": 0, 00:23:34.620 "data_size": 65536 00:23:34.620 }, 00:23:34.620 { 00:23:34.620 "name": "BaseBdev2", 00:23:34.620 "uuid": "9d5de14a-5182-459f-b3e6-60273711e3b9", 00:23:34.620 "is_configured": true, 00:23:34.620 "data_offset": 0, 00:23:34.620 "data_size": 65536 00:23:34.620 }, 00:23:34.620 { 00:23:34.620 "name": "BaseBdev3", 00:23:34.620 "uuid": "cece23b8-d89f-4851-b880-0209ac5b25c4", 00:23:34.620 "is_configured": true, 00:23:34.620 "data_offset": 0, 00:23:34.620 "data_size": 65536 00:23:34.620 } 00:23:34.620 ] 00:23:34.620 }' 00:23:34.620 01:06:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:34.620 01:06:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:34.620 01:06:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:34.620 01:06:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:34.620 01:06:08 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:35.559 01:06:09 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:35.559 01:06:09 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.559 01:06:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:35.559 01:06:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:35.559 01:06:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:35.559 01:06:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:35.559 01:06:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.559 01:06:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.818 01:06:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:35.818 "name": "raid_bdev1", 00:23:35.818 "uuid": "25fd823f-b4c7-44c5-8c04-07e4c0d5fcfc", 00:23:35.818 "strip_size_kb": 64, 00:23:35.818 "state": "online", 00:23:35.818 "raid_level": "raid5f", 00:23:35.818 "superblock": false, 00:23:35.818 "num_base_bdevs": 3, 00:23:35.818 "num_base_bdevs_discovered": 3, 00:23:35.818 "num_base_bdevs_operational": 3, 00:23:35.818 "process": { 00:23:35.818 "type": "rebuild", 00:23:35.818 "target": "spare", 00:23:35.818 "progress": { 00:23:35.818 "blocks": 112640, 00:23:35.818 "percent": 85 00:23:35.818 } 00:23:35.818 }, 00:23:35.818 "base_bdevs_list": [ 00:23:35.818 { 00:23:35.818 "name": "spare", 00:23:35.818 "uuid": "d38c6fb5-af57-5189-919f-dc5260d495e1", 00:23:35.818 "is_configured": true, 00:23:35.818 "data_offset": 0, 00:23:35.818 "data_size": 65536 00:23:35.818 }, 00:23:35.818 { 00:23:35.818 "name": "BaseBdev2", 00:23:35.818 "uuid": "9d5de14a-5182-459f-b3e6-60273711e3b9", 00:23:35.818 "is_configured": true, 00:23:35.818 "data_offset": 0, 00:23:35.818 "data_size": 65536 00:23:35.818 }, 00:23:35.818 { 00:23:35.818 "name": "BaseBdev3", 00:23:35.818 "uuid": "cece23b8-d89f-4851-b880-0209ac5b25c4", 00:23:35.818 "is_configured": true, 00:23:35.818 "data_offset": 0, 00:23:35.818 "data_size": 65536 00:23:35.818 } 00:23:35.818 ] 00:23:35.818 }' 00:23:35.818 01:06:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:36.077 01:06:10 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.077 01:06:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:36.077 01:06:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.077 01:06:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:36.646 [2024-11-18 01:06:11.016342] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:36.646 [2024-11-18 01:06:11.016475] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:36.646 [2024-11-18 01:06:11.016609] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.905 01:06:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:36.905 01:06:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.905 01:06:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:36.905 01:06:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:36.905 01:06:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:36.905 01:06:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:36.905 01:06:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.905 01:06:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.183 01:06:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:37.183 "name": "raid_bdev1", 00:23:37.183 "uuid": "25fd823f-b4c7-44c5-8c04-07e4c0d5fcfc", 00:23:37.183 "strip_size_kb": 64, 00:23:37.183 "state": "online", 00:23:37.183 "raid_level": "raid5f", 00:23:37.183 "superblock": false, 00:23:37.183 "num_base_bdevs": 3, 00:23:37.183 "num_base_bdevs_discovered": 3, 00:23:37.183 "num_base_bdevs_operational": 3, 00:23:37.183 "base_bdevs_list": [ 00:23:37.183 { 00:23:37.183 "name": "spare", 00:23:37.183 "uuid": "d38c6fb5-af57-5189-919f-dc5260d495e1", 00:23:37.183 "is_configured": true, 00:23:37.183 "data_offset": 0, 00:23:37.183 "data_size": 65536 00:23:37.183 }, 00:23:37.183 { 00:23:37.183 "name": "BaseBdev2", 00:23:37.183 "uuid": "9d5de14a-5182-459f-b3e6-60273711e3b9", 00:23:37.183 "is_configured": true, 00:23:37.183 "data_offset": 0, 00:23:37.183 "data_size": 65536 00:23:37.183 }, 00:23:37.183 { 00:23:37.183 "name": "BaseBdev3", 00:23:37.183 "uuid": "cece23b8-d89f-4851-b880-0209ac5b25c4", 00:23:37.183 "is_configured": true, 00:23:37.183 "data_offset": 0, 00:23:37.183 "data_size": 65536 00:23:37.183 } 00:23:37.183 ] 00:23:37.183 }' 00:23:37.183 01:06:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:37.183 01:06:11 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:37.183 01:06:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:37.452 01:06:11 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:37.452 01:06:11 -- bdev/bdev_raid.sh@660 -- # break 00:23:37.452 01:06:11 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:37.452 01:06:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:37.452 01:06:11 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:37.452 01:06:11 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:37.452 01:06:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:37.452 01:06:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.452 01:06:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:23:37.452 01:06:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:37.452 "name": "raid_bdev1", 00:23:37.452 "uuid": "25fd823f-b4c7-44c5-8c04-07e4c0d5fcfc", 00:23:37.452 "strip_size_kb": 64, 00:23:37.452 "state": "online", 00:23:37.452 "raid_level": "raid5f", 00:23:37.452 "superblock": false, 00:23:37.452 "num_base_bdevs": 3, 00:23:37.452 "num_base_bdevs_discovered": 3, 00:23:37.452 "num_base_bdevs_operational": 3, 00:23:37.452 "base_bdevs_list": [ 00:23:37.452 { 00:23:37.452 "name": "spare", 00:23:37.452 "uuid": "d38c6fb5-af57-5189-919f-dc5260d495e1", 00:23:37.452 "is_configured": true, 00:23:37.452 "data_offset": 0, 00:23:37.452 "data_size": 65536 00:23:37.452 }, 00:23:37.452 { 00:23:37.452 "name": "BaseBdev2", 00:23:37.452 "uuid": "9d5de14a-5182-459f-b3e6-60273711e3b9", 00:23:37.452 "is_configured": true, 00:23:37.452 "data_offset": 0, 00:23:37.452 "data_size": 65536 00:23:37.452 }, 00:23:37.452 { 00:23:37.452 "name": "BaseBdev3", 00:23:37.452 "uuid": "cece23b8-d89f-4851-b880-0209ac5b25c4", 00:23:37.452 "is_configured": true, 00:23:37.452 "data_offset": 0, 00:23:37.452 "data_size": 65536 00:23:37.452 } 00:23:37.452 ] 00:23:37.452 }' 00:23:37.452 01:06:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.711 01:06:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.970 01:06:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:37.970 "name": "raid_bdev1", 00:23:37.970 "uuid": "25fd823f-b4c7-44c5-8c04-07e4c0d5fcfc", 00:23:37.970 "strip_size_kb": 64, 00:23:37.970 "state": "online", 00:23:37.970 "raid_level": "raid5f", 00:23:37.970 "superblock": false, 00:23:37.970 "num_base_bdevs": 3, 00:23:37.970 "num_base_bdevs_discovered": 3, 00:23:37.970 "num_base_bdevs_operational": 3, 00:23:37.970 "base_bdevs_list": [ 00:23:37.970 { 00:23:37.970 "name": "spare", 00:23:37.970 "uuid": "d38c6fb5-af57-5189-919f-dc5260d495e1", 00:23:37.970 "is_configured": true, 00:23:37.970 "data_offset": 0, 00:23:37.970 "data_size": 65536 00:23:37.970 }, 00:23:37.970 { 00:23:37.970 "name": "BaseBdev2", 00:23:37.970 "uuid": "9d5de14a-5182-459f-b3e6-60273711e3b9", 00:23:37.970 "is_configured": true, 00:23:37.970 "data_offset": 0, 00:23:37.970 "data_size": 65536 00:23:37.970 }, 00:23:37.970 { 00:23:37.970 "name": "BaseBdev3", 00:23:37.970 "uuid": 
"cece23b8-d89f-4851-b880-0209ac5b25c4", 00:23:37.970 "is_configured": true, 00:23:37.970 "data_offset": 0, 00:23:37.970 "data_size": 65536 00:23:37.970 } 00:23:37.970 ] 00:23:37.970 }' 00:23:37.970 01:06:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:37.970 01:06:12 -- common/autotest_common.sh@10 -- # set +x 00:23:38.540 01:06:12 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:38.540 [2024-11-18 01:06:12.918858] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:38.540 [2024-11-18 01:06:12.918902] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:38.540 [2024-11-18 01:06:12.919026] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:38.540 [2024-11-18 01:06:12.919139] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:38.540 [2024-11-18 01:06:12.919156] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:23:38.800 01:06:12 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.800 01:06:12 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:39.059 01:06:13 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:39.059 01:06:13 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:39.059 01:06:13 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:39.059 01:06:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:39.059 01:06:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:39.059 01:06:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:39.059 01:06:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:39.059 01:06:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:39.059 01:06:13 -- bdev/nbd_common.sh@12 -- # local i 00:23:39.059 01:06:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:39.059 01:06:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:39.059 01:06:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:39.335 /dev/nbd0 00:23:39.335 01:06:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:39.335 01:06:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:39.335 01:06:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:39.335 01:06:13 -- common/autotest_common.sh@867 -- # local i 00:23:39.335 01:06:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:39.335 01:06:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:39.335 01:06:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:39.335 01:06:13 -- common/autotest_common.sh@871 -- # break 00:23:39.335 01:06:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:39.335 01:06:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:39.335 01:06:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:39.335 1+0 records in 00:23:39.335 1+0 records out 00:23:39.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551799 s, 7.4 MB/s 00:23:39.335 01:06:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.335 01:06:13 
-- common/autotest_common.sh@884 -- # size=4096 00:23:39.335 01:06:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.335 01:06:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:39.335 01:06:13 -- common/autotest_common.sh@887 -- # return 0 00:23:39.335 01:06:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:39.335 01:06:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:39.335 01:06:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:39.595 /dev/nbd1 00:23:39.595 01:06:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:39.595 01:06:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:39.595 01:06:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:39.595 01:06:13 -- common/autotest_common.sh@867 -- # local i 00:23:39.595 01:06:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:39.595 01:06:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:39.595 01:06:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:39.595 01:06:13 -- common/autotest_common.sh@871 -- # break 00:23:39.595 01:06:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:39.595 01:06:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:39.595 01:06:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:39.595 1+0 records in 00:23:39.595 1+0 records out 00:23:39.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000753266 s, 5.4 MB/s 00:23:39.595 01:06:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.595 01:06:13 -- common/autotest_common.sh@884 -- # size=4096 00:23:39.595 01:06:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.595 01:06:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:39.595 01:06:13 -- common/autotest_common.sh@887 -- # return 0 00:23:39.595 01:06:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:39.595 01:06:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:39.595 01:06:13 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:39.595 01:06:13 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:39.595 01:06:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:39.595 01:06:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:39.595 01:06:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:39.595 01:06:13 -- bdev/nbd_common.sh@51 -- # local i 00:23:39.595 01:06:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:39.595 01:06:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:39.854 01:06:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:39.854 01:06:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:39.854 01:06:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:39.854 01:06:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:39.854 01:06:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:39.854 01:06:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:39.854 01:06:14 -- bdev/nbd_common.sh@41 -- # break 00:23:39.854 01:06:14 -- bdev/nbd_common.sh@45 -- # return 0 00:23:39.854 01:06:14 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:23:39.854 01:06:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:40.113 01:06:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:40.113 01:06:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:40.113 01:06:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:40.113 01:06:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:40.113 01:06:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:40.113 01:06:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:40.113 01:06:14 -- bdev/nbd_common.sh@41 -- # break 00:23:40.113 01:06:14 -- bdev/nbd_common.sh@45 -- # return 0 00:23:40.113 01:06:14 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:40.113 01:06:14 -- bdev/bdev_raid.sh@709 -- # killprocess 138781 00:23:40.113 01:06:14 -- common/autotest_common.sh@936 -- # '[' -z 138781 ']' 00:23:40.113 01:06:14 -- common/autotest_common.sh@940 -- # kill -0 138781 00:23:40.113 01:06:14 -- common/autotest_common.sh@941 -- # uname 00:23:40.113 01:06:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:40.113 01:06:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138781 00:23:40.373 01:06:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:40.373 killing process with pid 138781 00:23:40.373 01:06:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:40.373 01:06:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138781' 00:23:40.373 Received shutdown signal, test time was about 60.000000 seconds 00:23:40.373 00:23:40.373 Latency(us) 00:23:40.373 [2024-11-18T01:06:14.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.373 [2024-11-18T01:06:14.772Z] =================================================================================================================== 00:23:40.373 [2024-11-18T01:06:14.772Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:40.373 01:06:14 -- common/autotest_common.sh@955 -- # kill 138781 00:23:40.373 01:06:14 -- common/autotest_common.sh@960 -- # wait 138781 00:23:40.373 [2024-11-18 01:06:14.518929] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:40.373 [2024-11-18 01:06:14.595061] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:40.632 01:06:14 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:40.632 00:23:40.632 real 0m19.445s 00:23:40.632 user 0m28.793s 00:23:40.632 sys 0m3.201s 00:23:40.632 01:06:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:40.632 01:06:14 -- common/autotest_common.sh@10 -- # set +x 00:23:40.632 ************************************ 00:23:40.632 END TEST raid5f_rebuild_test 00:23:40.632 ************************************ 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:23:40.892 01:06:15 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:40.892 01:06:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:40.892 01:06:15 -- common/autotest_common.sh@10 -- # set +x 00:23:40.892 ************************************ 00:23:40.892 START TEST raid5f_rebuild_test_sb 00:23:40.892 ************************************ 00:23:40.892 01:06:15 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 true false 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@518 -- 
# local num_base_bdevs=3 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@544 -- # raid_pid=139310 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@545 -- # waitforlisten 139310 /var/tmp/spdk-raid.sock 00:23:40.892 01:06:15 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:40.892 01:06:15 -- common/autotest_common.sh@829 -- # '[' -z 139310 ']' 00:23:40.892 01:06:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:40.892 01:06:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.892 01:06:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:40.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:40.892 01:06:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.892 01:06:15 -- common/autotest_common.sh@10 -- # set +x 00:23:40.892 [2024-11-18 01:06:15.160852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:40.892 [2024-11-18 01:06:15.161122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139310 ] 00:23:40.892 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:40.892 Zero copy mechanism will not be used. 
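The superblock variant that begins here differs from the previous run in the setup that follows: each malloc bdev is wrapped in a passthru device before joining the array, and '-s' is appended to create_arg so the raid is created with an on-disk superblock. A rough sketch, assuming create_arg reaches bdev_raid_create the same way it did in the non-superblock run:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $RPC bdev_malloc_create 32 512 -b "${b}_malloc"
      $RPC bdev_passthru_create -b "${b}_malloc" -p "$b"
  done
  # '-s' below is the assumed placement of the superblock flag carried in create_arg
  $RPC bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1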
00:23:41.152 [2024-11-18 01:06:15.312898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.152 [2024-11-18 01:06:15.403759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.152 [2024-11-18 01:06:15.483772] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:41.720 01:06:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.720 01:06:16 -- common/autotest_common.sh@862 -- # return 0 00:23:41.720 01:06:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:41.720 01:06:16 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:41.720 01:06:16 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:41.979 BaseBdev1_malloc 00:23:41.979 01:06:16 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:42.238 [2024-11-18 01:06:16.475125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:42.238 [2024-11-18 01:06:16.475273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.238 [2024-11-18 01:06:16.475318] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:23:42.238 [2024-11-18 01:06:16.475382] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.238 [2024-11-18 01:06:16.478417] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.238 [2024-11-18 01:06:16.478491] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:42.238 BaseBdev1 00:23:42.238 01:06:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:42.238 01:06:16 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:42.238 01:06:16 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:42.498 BaseBdev2_malloc 00:23:42.498 01:06:16 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:42.498 [2024-11-18 01:06:16.895279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:42.498 [2024-11-18 01:06:16.895397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.498 [2024-11-18 01:06:16.895445] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:23:42.498 [2024-11-18 01:06:16.895493] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.498 [2024-11-18 01:06:16.898392] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.498 [2024-11-18 01:06:16.898460] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:42.756 BaseBdev2 00:23:42.756 01:06:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:42.756 01:06:16 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:42.756 01:06:16 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:43.016 BaseBdev3_malloc 00:23:43.016 01:06:17 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:23:43.016 [2024-11-18 01:06:17.378664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:43.016 [2024-11-18 01:06:17.378780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.016 [2024-11-18 01:06:17.378826] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:43.016 [2024-11-18 01:06:17.378886] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.016 [2024-11-18 01:06:17.381764] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.016 [2024-11-18 01:06:17.381824] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:43.016 BaseBdev3 00:23:43.016 01:06:17 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:43.275 spare_malloc 00:23:43.275 01:06:17 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:43.535 spare_delay 00:23:43.535 01:06:17 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:43.794 [2024-11-18 01:06:17.982894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:43.794 [2024-11-18 01:06:17.983028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.795 [2024-11-18 01:06:17.983071] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:43.795 [2024-11-18 01:06:17.983131] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.795 [2024-11-18 01:06:17.986100] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.795 [2024-11-18 01:06:17.986175] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:43.795 spare 00:23:43.795 01:06:17 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:43.795 [2024-11-18 01:06:18.179118] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:43.795 [2024-11-18 01:06:18.181722] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:43.795 [2024-11-18 01:06:18.181801] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:43.795 [2024-11-18 01:06:18.182034] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:43.795 [2024-11-18 01:06:18.182046] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:43.795 [2024-11-18 01:06:18.182258] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:23:43.795 [2024-11-18 01:06:18.183055] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:43.795 [2024-11-18 01:06:18.183078] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:23:43.795 [2024-11-18 01:06:18.183329] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.053 01:06:18 -- bdev/bdev_raid.sh@564 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:44.053 01:06:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:44.053 01:06:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:44.053 01:06:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:44.053 01:06:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:44.053 01:06:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:44.053 01:06:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:44.053 01:06:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:44.053 01:06:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:44.053 01:06:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:44.053 01:06:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.053 01:06:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.053 01:06:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:44.053 "name": "raid_bdev1", 00:23:44.053 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:23:44.053 "strip_size_kb": 64, 00:23:44.053 "state": "online", 00:23:44.053 "raid_level": "raid5f", 00:23:44.053 "superblock": true, 00:23:44.053 "num_base_bdevs": 3, 00:23:44.053 "num_base_bdevs_discovered": 3, 00:23:44.053 "num_base_bdevs_operational": 3, 00:23:44.053 "base_bdevs_list": [ 00:23:44.053 { 00:23:44.053 "name": "BaseBdev1", 00:23:44.053 "uuid": "24278dc3-7921-590c-874a-63739f105f17", 00:23:44.054 "is_configured": true, 00:23:44.054 "data_offset": 2048, 00:23:44.054 "data_size": 63488 00:23:44.054 }, 00:23:44.054 { 00:23:44.054 "name": "BaseBdev2", 00:23:44.054 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:23:44.054 "is_configured": true, 00:23:44.054 "data_offset": 2048, 00:23:44.054 "data_size": 63488 00:23:44.054 }, 00:23:44.054 { 00:23:44.054 "name": "BaseBdev3", 00:23:44.054 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:23:44.054 "is_configured": true, 00:23:44.054 "data_offset": 2048, 00:23:44.054 "data_size": 63488 00:23:44.054 } 00:23:44.054 ] 00:23:44.054 }' 00:23:44.054 01:06:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:44.054 01:06:18 -- common/autotest_common.sh@10 -- # set +x 00:23:44.621 01:06:18 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:44.621 01:06:18 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:44.887 [2024-11-18 01:06:19.147698] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:44.887 01:06:19 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:23:44.887 01:06:19 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:44.887 01:06:19 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.149 01:06:19 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:45.149 01:06:19 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:45.149 01:06:19 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:45.149 01:06:19 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:45.149 01:06:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:45.149 01:06:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:45.149 01:06:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:45.149 01:06:19 -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:45.149 01:06:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:45.149 01:06:19 -- bdev/nbd_common.sh@12 -- # local i 00:23:45.149 01:06:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:45.149 01:06:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:45.149 01:06:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:45.407 [2024-11-18 01:06:19.623646] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:23:45.407 /dev/nbd0 00:23:45.407 01:06:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:45.407 01:06:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:45.407 01:06:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:45.407 01:06:19 -- common/autotest_common.sh@867 -- # local i 00:23:45.407 01:06:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:45.407 01:06:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:45.407 01:06:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:45.407 01:06:19 -- common/autotest_common.sh@871 -- # break 00:23:45.407 01:06:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:45.407 01:06:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:45.407 01:06:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:45.407 1+0 records in 00:23:45.407 1+0 records out 00:23:45.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030486 s, 13.4 MB/s 00:23:45.407 01:06:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:45.407 01:06:19 -- common/autotest_common.sh@884 -- # size=4096 00:23:45.407 01:06:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:45.408 01:06:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:45.408 01:06:19 -- common/autotest_common.sh@887 -- # return 0 00:23:45.408 01:06:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:45.408 01:06:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:45.408 01:06:19 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:45.408 01:06:19 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:23:45.408 01:06:19 -- bdev/bdev_raid.sh@582 -- # echo 128 00:23:45.408 01:06:19 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:23:45.976 496+0 records in 00:23:45.976 496+0 records out 00:23:45.976 65011712 bytes (65 MB, 62 MiB) copied, 0.380213 s, 171 MB/s 00:23:45.976 01:06:20 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:45.976 01:06:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:45.976 01:06:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:45.976 01:06:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:45.976 01:06:20 -- bdev/nbd_common.sh@51 -- # local i 00:23:45.976 01:06:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:45.976 01:06:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:45.976 [2024-11-18 01:06:20.281243] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.976 01:06:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:45.976 01:06:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
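
Condensed, the block of RPCs traced above performs the whole assembly-and-write phase: three malloc bdevs wrapped in passthru bdevs become the base devices, a delay-wrapped spare is prepared for the later rebuild, the raid5f bdev is created with an on-disk superblock (-s), exported over NBD, and filled with full-stripe writes (64 KiB strip x 2 data strips = 128 KiB, which is why the dd above uses bs=131072). A hedged sketch of that flow, using only RPCs and arguments that appear in the trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3; do
    $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc             # 32 MiB, 512-byte blocks
    $RPC bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev$i
  done
  $RPC bdev_malloc_create 32 512 -b spare_malloc
  $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $RPC bdev_passthru_create -b spare_delay -p spare
  $RPC bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1
  $RPC nbd_start_disk raid_bdev1 /dev/nbd0
  dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct    # full-stripe writes only
  $RPC nbd_stop_disk /dev/nbd0
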
00:23:45.976 01:06:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:45.976 01:06:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:45.976 01:06:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:45.976 01:06:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:45.976 01:06:20 -- bdev/nbd_common.sh@41 -- # break 00:23:45.976 01:06:20 -- bdev/nbd_common.sh@45 -- # return 0 00:23:45.976 01:06:20 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:46.234 [2024-11-18 01:06:20.468944] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:46.234 01:06:20 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:46.234 01:06:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:46.234 01:06:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:46.234 01:06:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:46.234 01:06:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:46.234 01:06:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:46.234 01:06:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:46.234 01:06:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:46.234 01:06:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:46.234 01:06:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:46.234 01:06:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.234 01:06:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.493 01:06:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:46.493 "name": "raid_bdev1", 00:23:46.493 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:23:46.493 "strip_size_kb": 64, 00:23:46.493 "state": "online", 00:23:46.493 "raid_level": "raid5f", 00:23:46.493 "superblock": true, 00:23:46.493 "num_base_bdevs": 3, 00:23:46.493 "num_base_bdevs_discovered": 2, 00:23:46.493 "num_base_bdevs_operational": 2, 00:23:46.493 "base_bdevs_list": [ 00:23:46.493 { 00:23:46.493 "name": null, 00:23:46.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.493 "is_configured": false, 00:23:46.493 "data_offset": 2048, 00:23:46.493 "data_size": 63488 00:23:46.493 }, 00:23:46.493 { 00:23:46.493 "name": "BaseBdev2", 00:23:46.493 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:23:46.493 "is_configured": true, 00:23:46.493 "data_offset": 2048, 00:23:46.493 "data_size": 63488 00:23:46.493 }, 00:23:46.493 { 00:23:46.493 "name": "BaseBdev3", 00:23:46.493 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:23:46.493 "is_configured": true, 00:23:46.493 "data_offset": 2048, 00:23:46.493 "data_size": 63488 00:23:46.493 } 00:23:46.493 ] 00:23:46.493 }' 00:23:46.493 01:06:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:46.493 01:06:20 -- common/autotest_common.sh@10 -- # set +x 00:23:47.062 01:06:21 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:47.321 [2024-11-18 01:06:21.517190] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:47.321 [2024-11-18 01:06:21.517263] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:47.321 [2024-11-18 01:06:21.524450] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000025500 
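
The degrade-and-rebuild exercise just traced boils down to three RPCs plus a progress check: pull one base bdev out (raid5f stays online with two of three members), confirm the degraded state, then attach the delay-wrapped spare so the raid module starts rebuilding onto it; the 100 ms write delay configured on spare_delay earlier is presumably what keeps the rebuild slow enough to observe. A sketch of that sequence, with the polling loop as an illustrative reconstruction of verify_raid_bdev_process rather than a copy of it, and a 60-second timeout chosen only for the example:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_remove_base_bdev BaseBdev1       # degrade the array
  $RPC bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'   # expect 2
  $RPC bdev_raid_add_base_bdev raid_bdev1 spare   # attach the spare; rebuild starts
  # Watch the embedded "process" object until the rebuild finishes.
  while (( SECONDS < 60 )); do
    info=$($RPC bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
    jq -r '"rebuild: \(.process.progress.blocks) blocks (\(.process.progress.percent)%)"' <<< "$info"
    sleep 1
  done
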
00:23:47.321 [2024-11-18 01:06:21.527634] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:47.321 01:06:21 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:48.259 01:06:22 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.259 01:06:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:48.259 01:06:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:48.259 01:06:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:48.259 01:06:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:48.259 01:06:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.259 01:06:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.518 01:06:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:48.518 "name": "raid_bdev1", 00:23:48.518 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:23:48.518 "strip_size_kb": 64, 00:23:48.518 "state": "online", 00:23:48.518 "raid_level": "raid5f", 00:23:48.518 "superblock": true, 00:23:48.518 "num_base_bdevs": 3, 00:23:48.518 "num_base_bdevs_discovered": 3, 00:23:48.518 "num_base_bdevs_operational": 3, 00:23:48.518 "process": { 00:23:48.518 "type": "rebuild", 00:23:48.518 "target": "spare", 00:23:48.518 "progress": { 00:23:48.518 "blocks": 24576, 00:23:48.518 "percent": 19 00:23:48.518 } 00:23:48.518 }, 00:23:48.518 "base_bdevs_list": [ 00:23:48.518 { 00:23:48.518 "name": "spare", 00:23:48.518 "uuid": "c55c8c4a-50ec-5942-9871-00c9801def6b", 00:23:48.518 "is_configured": true, 00:23:48.518 "data_offset": 2048, 00:23:48.518 "data_size": 63488 00:23:48.518 }, 00:23:48.518 { 00:23:48.518 "name": "BaseBdev2", 00:23:48.518 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:23:48.518 "is_configured": true, 00:23:48.518 "data_offset": 2048, 00:23:48.518 "data_size": 63488 00:23:48.518 }, 00:23:48.518 { 00:23:48.518 "name": "BaseBdev3", 00:23:48.518 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:23:48.518 "is_configured": true, 00:23:48.518 "data_offset": 2048, 00:23:48.518 "data_size": 63488 00:23:48.518 } 00:23:48.518 ] 00:23:48.518 }' 00:23:48.518 01:06:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:48.518 01:06:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:48.518 01:06:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:48.518 01:06:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:48.518 01:06:22 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:48.778 [2024-11-18 01:06:23.061243] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:48.778 [2024-11-18 01:06:23.145304] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:48.778 [2024-11-18 01:06:23.145434] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.778 01:06:23 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:48.778 01:06:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:48.778 01:06:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:48.778 01:06:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:48.778 01:06:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:49.037 01:06:23 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:23:49.037 01:06:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:49.037 01:06:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:49.037 01:06:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:49.037 01:06:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:49.037 01:06:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.037 01:06:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.296 01:06:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:49.296 "name": "raid_bdev1", 00:23:49.296 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:23:49.296 "strip_size_kb": 64, 00:23:49.296 "state": "online", 00:23:49.296 "raid_level": "raid5f", 00:23:49.296 "superblock": true, 00:23:49.296 "num_base_bdevs": 3, 00:23:49.296 "num_base_bdevs_discovered": 2, 00:23:49.296 "num_base_bdevs_operational": 2, 00:23:49.296 "base_bdevs_list": [ 00:23:49.296 { 00:23:49.296 "name": null, 00:23:49.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.296 "is_configured": false, 00:23:49.296 "data_offset": 2048, 00:23:49.296 "data_size": 63488 00:23:49.296 }, 00:23:49.296 { 00:23:49.296 "name": "BaseBdev2", 00:23:49.296 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:23:49.296 "is_configured": true, 00:23:49.296 "data_offset": 2048, 00:23:49.296 "data_size": 63488 00:23:49.296 }, 00:23:49.296 { 00:23:49.296 "name": "BaseBdev3", 00:23:49.296 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:23:49.296 "is_configured": true, 00:23:49.296 "data_offset": 2048, 00:23:49.296 "data_size": 63488 00:23:49.296 } 00:23:49.296 ] 00:23:49.296 }' 00:23:49.296 01:06:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:49.296 01:06:23 -- common/autotest_common.sh@10 -- # set +x 00:23:49.864 01:06:24 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:49.864 01:06:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:49.864 01:06:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:49.864 01:06:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:49.864 01:06:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:49.864 01:06:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.864 01:06:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.864 01:06:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:49.864 "name": "raid_bdev1", 00:23:49.864 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:23:49.864 "strip_size_kb": 64, 00:23:49.864 "state": "online", 00:23:49.864 "raid_level": "raid5f", 00:23:49.864 "superblock": true, 00:23:49.864 "num_base_bdevs": 3, 00:23:49.864 "num_base_bdevs_discovered": 2, 00:23:49.864 "num_base_bdevs_operational": 2, 00:23:49.864 "base_bdevs_list": [ 00:23:49.864 { 00:23:49.864 "name": null, 00:23:49.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.864 "is_configured": false, 00:23:49.864 "data_offset": 2048, 00:23:49.864 "data_size": 63488 00:23:49.864 }, 00:23:49.864 { 00:23:49.864 "name": "BaseBdev2", 00:23:49.864 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:23:49.864 "is_configured": true, 00:23:49.864 "data_offset": 2048, 00:23:49.864 "data_size": 63488 00:23:49.864 }, 00:23:49.864 { 00:23:49.864 "name": "BaseBdev3", 00:23:49.864 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:23:49.864 
"is_configured": true, 00:23:49.864 "data_offset": 2048, 00:23:49.864 "data_size": 63488 00:23:49.864 } 00:23:49.864 ] 00:23:49.864 }' 00:23:49.864 01:06:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:50.122 01:06:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:50.122 01:06:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:50.122 01:06:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:50.122 01:06:24 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:50.380 [2024-11-18 01:06:24.528033] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:50.380 [2024-11-18 01:06:24.528102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:50.380 [2024-11-18 01:06:24.535110] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:23:50.380 [2024-11-18 01:06:24.537945] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:50.380 01:06:24 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:51.337 01:06:25 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.337 01:06:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:51.337 01:06:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:51.337 01:06:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:51.337 01:06:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:51.337 01:06:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.337 01:06:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.595 01:06:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:51.595 "name": "raid_bdev1", 00:23:51.595 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:23:51.595 "strip_size_kb": 64, 00:23:51.595 "state": "online", 00:23:51.595 "raid_level": "raid5f", 00:23:51.595 "superblock": true, 00:23:51.595 "num_base_bdevs": 3, 00:23:51.595 "num_base_bdevs_discovered": 3, 00:23:51.595 "num_base_bdevs_operational": 3, 00:23:51.595 "process": { 00:23:51.595 "type": "rebuild", 00:23:51.595 "target": "spare", 00:23:51.595 "progress": { 00:23:51.595 "blocks": 24576, 00:23:51.595 "percent": 19 00:23:51.595 } 00:23:51.595 }, 00:23:51.595 "base_bdevs_list": [ 00:23:51.595 { 00:23:51.595 "name": "spare", 00:23:51.595 "uuid": "c55c8c4a-50ec-5942-9871-00c9801def6b", 00:23:51.595 "is_configured": true, 00:23:51.595 "data_offset": 2048, 00:23:51.595 "data_size": 63488 00:23:51.595 }, 00:23:51.595 { 00:23:51.595 "name": "BaseBdev2", 00:23:51.595 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:23:51.595 "is_configured": true, 00:23:51.595 "data_offset": 2048, 00:23:51.595 "data_size": 63488 00:23:51.595 }, 00:23:51.595 { 00:23:51.595 "name": "BaseBdev3", 00:23:51.595 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:23:51.595 "is_configured": true, 00:23:51.595 "data_offset": 2048, 00:23:51.595 "data_size": 63488 00:23:51.595 } 00:23:51.595 ] 00:23:51.595 }' 00:23:51.595 01:06:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:51.595 01:06:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:51.595 01:06:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:51.596 01:06:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 
00:23:51.596 01:06:25 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:51.596 01:06:25 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:51.596 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:51.596 01:06:25 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:23:51.596 01:06:25 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:51.596 01:06:25 -- bdev/bdev_raid.sh@657 -- # local timeout=588 00:23:51.596 01:06:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:51.596 01:06:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.596 01:06:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:51.596 01:06:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:51.596 01:06:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:51.596 01:06:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:51.596 01:06:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.596 01:06:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.854 01:06:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:51.854 "name": "raid_bdev1", 00:23:51.854 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:23:51.854 "strip_size_kb": 64, 00:23:51.854 "state": "online", 00:23:51.854 "raid_level": "raid5f", 00:23:51.854 "superblock": true, 00:23:51.854 "num_base_bdevs": 3, 00:23:51.854 "num_base_bdevs_discovered": 3, 00:23:51.854 "num_base_bdevs_operational": 3, 00:23:51.854 "process": { 00:23:51.854 "type": "rebuild", 00:23:51.854 "target": "spare", 00:23:51.854 "progress": { 00:23:51.854 "blocks": 30720, 00:23:51.854 "percent": 24 00:23:51.854 } 00:23:51.854 }, 00:23:51.854 "base_bdevs_list": [ 00:23:51.854 { 00:23:51.854 "name": "spare", 00:23:51.854 "uuid": "c55c8c4a-50ec-5942-9871-00c9801def6b", 00:23:51.854 "is_configured": true, 00:23:51.854 "data_offset": 2048, 00:23:51.854 "data_size": 63488 00:23:51.854 }, 00:23:51.854 { 00:23:51.854 "name": "BaseBdev2", 00:23:51.854 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:23:51.854 "is_configured": true, 00:23:51.854 "data_offset": 2048, 00:23:51.854 "data_size": 63488 00:23:51.854 }, 00:23:51.854 { 00:23:51.854 "name": "BaseBdev3", 00:23:51.854 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:23:51.854 "is_configured": true, 00:23:51.854 "data_offset": 2048, 00:23:51.854 "data_size": 63488 00:23:51.854 } 00:23:51.854 ] 00:23:51.854 }' 00:23:51.854 01:06:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:51.854 01:06:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:51.854 01:06:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:51.854 01:06:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:51.854 01:06:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:53.232 01:06:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:53.232 01:06:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:53.232 01:06:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:53.232 01:06:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:53.232 01:06:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:53.232 01:06:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:53.232 01:06:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.232 01:06:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.232 01:06:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:53.232 "name": "raid_bdev1", 00:23:53.232 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:23:53.232 "strip_size_kb": 64, 00:23:53.232 "state": "online", 00:23:53.232 "raid_level": "raid5f", 00:23:53.232 "superblock": true, 00:23:53.232 "num_base_bdevs": 3, 00:23:53.232 "num_base_bdevs_discovered": 3, 00:23:53.232 "num_base_bdevs_operational": 3, 00:23:53.232 "process": { 00:23:53.232 "type": "rebuild", 00:23:53.232 "target": "spare", 00:23:53.232 "progress": { 00:23:53.232 "blocks": 57344, 00:23:53.232 "percent": 45 00:23:53.232 } 00:23:53.232 }, 00:23:53.232 "base_bdevs_list": [ 00:23:53.232 { 00:23:53.232 "name": "spare", 00:23:53.232 "uuid": "c55c8c4a-50ec-5942-9871-00c9801def6b", 00:23:53.232 "is_configured": true, 00:23:53.232 "data_offset": 2048, 00:23:53.232 "data_size": 63488 00:23:53.232 }, 00:23:53.232 { 00:23:53.232 "name": "BaseBdev2", 00:23:53.232 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:23:53.232 "is_configured": true, 00:23:53.232 "data_offset": 2048, 00:23:53.232 "data_size": 63488 00:23:53.232 }, 00:23:53.232 { 00:23:53.232 "name": "BaseBdev3", 00:23:53.232 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:23:53.232 "is_configured": true, 00:23:53.232 "data_offset": 2048, 00:23:53.232 "data_size": 63488 00:23:53.232 } 00:23:53.232 ] 00:23:53.232 }' 00:23:53.232 01:06:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:53.232 01:06:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:53.232 01:06:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:53.232 01:06:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:53.232 01:06:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:54.168 01:06:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:54.168 01:06:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.168 01:06:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:54.168 01:06:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:54.168 01:06:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:54.168 01:06:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:54.168 01:06:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.168 01:06:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.427 01:06:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:54.427 "name": "raid_bdev1", 00:23:54.427 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:23:54.427 "strip_size_kb": 64, 00:23:54.427 "state": "online", 00:23:54.427 "raid_level": "raid5f", 00:23:54.427 "superblock": true, 00:23:54.427 "num_base_bdevs": 3, 00:23:54.427 "num_base_bdevs_discovered": 3, 00:23:54.427 "num_base_bdevs_operational": 3, 00:23:54.427 "process": { 00:23:54.427 "type": "rebuild", 00:23:54.427 "target": "spare", 00:23:54.427 "progress": { 00:23:54.427 "blocks": 86016, 00:23:54.427 "percent": 67 00:23:54.427 } 00:23:54.427 }, 00:23:54.427 "base_bdevs_list": [ 00:23:54.427 { 00:23:54.427 "name": "spare", 00:23:54.427 "uuid": "c55c8c4a-50ec-5942-9871-00c9801def6b", 00:23:54.427 "is_configured": true, 00:23:54.427 "data_offset": 2048, 00:23:54.427 "data_size": 63488 00:23:54.427 }, 00:23:54.427 { 
00:23:54.427 "name": "BaseBdev2", 00:23:54.427 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:23:54.427 "is_configured": true, 00:23:54.427 "data_offset": 2048, 00:23:54.427 "data_size": 63488 00:23:54.427 }, 00:23:54.427 { 00:23:54.427 "name": "BaseBdev3", 00:23:54.427 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:23:54.427 "is_configured": true, 00:23:54.427 "data_offset": 2048, 00:23:54.427 "data_size": 63488 00:23:54.427 } 00:23:54.427 ] 00:23:54.427 }' 00:23:54.427 01:06:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:54.685 01:06:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:54.685 01:06:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:54.685 01:06:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:54.685 01:06:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:55.620 01:06:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:55.620 01:06:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:55.620 01:06:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:55.620 01:06:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:55.620 01:06:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:55.620 01:06:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:55.620 01:06:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.620 01:06:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.879 01:06:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:55.879 "name": "raid_bdev1", 00:23:55.879 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:23:55.879 "strip_size_kb": 64, 00:23:55.879 "state": "online", 00:23:55.879 "raid_level": "raid5f", 00:23:55.879 "superblock": true, 00:23:55.879 "num_base_bdevs": 3, 00:23:55.879 "num_base_bdevs_discovered": 3, 00:23:55.879 "num_base_bdevs_operational": 3, 00:23:55.879 "process": { 00:23:55.879 "type": "rebuild", 00:23:55.879 "target": "spare", 00:23:55.879 "progress": { 00:23:55.879 "blocks": 112640, 00:23:55.879 "percent": 88 00:23:55.879 } 00:23:55.879 }, 00:23:55.879 "base_bdevs_list": [ 00:23:55.879 { 00:23:55.879 "name": "spare", 00:23:55.879 "uuid": "c55c8c4a-50ec-5942-9871-00c9801def6b", 00:23:55.879 "is_configured": true, 00:23:55.879 "data_offset": 2048, 00:23:55.879 "data_size": 63488 00:23:55.879 }, 00:23:55.879 { 00:23:55.879 "name": "BaseBdev2", 00:23:55.879 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:23:55.879 "is_configured": true, 00:23:55.879 "data_offset": 2048, 00:23:55.879 "data_size": 63488 00:23:55.879 }, 00:23:55.879 { 00:23:55.879 "name": "BaseBdev3", 00:23:55.879 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:23:55.879 "is_configured": true, 00:23:55.879 "data_offset": 2048, 00:23:55.879 "data_size": 63488 00:23:55.879 } 00:23:55.879 ] 00:23:55.879 }' 00:23:55.879 01:06:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:55.879 01:06:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:55.879 01:06:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:55.879 01:06:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:55.879 01:06:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:56.445 [2024-11-18 01:06:30.804619] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:56.446 [2024-11-18 01:06:30.804733] 
bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:56.446 [2024-11-18 01:06:30.804917] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:57.011 01:06:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:57.011 01:06:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.011 01:06:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:57.011 01:06:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:57.011 01:06:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:57.011 01:06:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:57.011 01:06:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.011 01:06:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.270 01:06:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:57.270 "name": "raid_bdev1", 00:23:57.270 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:23:57.270 "strip_size_kb": 64, 00:23:57.270 "state": "online", 00:23:57.270 "raid_level": "raid5f", 00:23:57.270 "superblock": true, 00:23:57.270 "num_base_bdevs": 3, 00:23:57.270 "num_base_bdevs_discovered": 3, 00:23:57.270 "num_base_bdevs_operational": 3, 00:23:57.270 "base_bdevs_list": [ 00:23:57.270 { 00:23:57.270 "name": "spare", 00:23:57.270 "uuid": "c55c8c4a-50ec-5942-9871-00c9801def6b", 00:23:57.270 "is_configured": true, 00:23:57.270 "data_offset": 2048, 00:23:57.270 "data_size": 63488 00:23:57.270 }, 00:23:57.270 { 00:23:57.270 "name": "BaseBdev2", 00:23:57.270 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:23:57.270 "is_configured": true, 00:23:57.270 "data_offset": 2048, 00:23:57.270 "data_size": 63488 00:23:57.270 }, 00:23:57.270 { 00:23:57.270 "name": "BaseBdev3", 00:23:57.270 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:23:57.270 "is_configured": true, 00:23:57.270 "data_offset": 2048, 00:23:57.270 "data_size": 63488 00:23:57.270 } 00:23:57.270 ] 00:23:57.270 }' 00:23:57.270 01:06:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:57.270 01:06:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:57.270 01:06:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:57.270 01:06:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:57.270 01:06:31 -- bdev/bdev_raid.sh@660 -- # break 00:23:57.270 01:06:31 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:57.270 01:06:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:57.270 01:06:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:57.270 01:06:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:57.270 01:06:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:57.270 01:06:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.270 01:06:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.529 01:06:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:57.529 "name": "raid_bdev1", 00:23:57.529 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:23:57.529 "strip_size_kb": 64, 00:23:57.529 "state": "online", 00:23:57.529 "raid_level": "raid5f", 00:23:57.529 "superblock": true, 00:23:57.529 "num_base_bdevs": 3, 00:23:57.529 "num_base_bdevs_discovered": 3, 00:23:57.529 
"num_base_bdevs_operational": 3, 00:23:57.529 "base_bdevs_list": [ 00:23:57.529 { 00:23:57.529 "name": "spare", 00:23:57.529 "uuid": "c55c8c4a-50ec-5942-9871-00c9801def6b", 00:23:57.529 "is_configured": true, 00:23:57.529 "data_offset": 2048, 00:23:57.529 "data_size": 63488 00:23:57.529 }, 00:23:57.529 { 00:23:57.529 "name": "BaseBdev2", 00:23:57.529 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:23:57.529 "is_configured": true, 00:23:57.529 "data_offset": 2048, 00:23:57.529 "data_size": 63488 00:23:57.529 }, 00:23:57.529 { 00:23:57.529 "name": "BaseBdev3", 00:23:57.529 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:23:57.529 "is_configured": true, 00:23:57.529 "data_offset": 2048, 00:23:57.529 "data_size": 63488 00:23:57.529 } 00:23:57.529 ] 00:23:57.529 }' 00:23:57.529 01:06:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:57.529 01:06:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:57.529 01:06:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:57.529 01:06:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:57.529 01:06:31 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:57.529 01:06:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:57.529 01:06:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:57.530 01:06:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:57.530 01:06:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:57.530 01:06:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:57.530 01:06:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:57.530 01:06:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:57.530 01:06:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:57.530 01:06:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:57.530 01:06:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.530 01:06:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.788 01:06:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:57.788 "name": "raid_bdev1", 00:23:57.788 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:23:57.788 "strip_size_kb": 64, 00:23:57.788 "state": "online", 00:23:57.788 "raid_level": "raid5f", 00:23:57.788 "superblock": true, 00:23:57.788 "num_base_bdevs": 3, 00:23:57.788 "num_base_bdevs_discovered": 3, 00:23:57.788 "num_base_bdevs_operational": 3, 00:23:57.788 "base_bdevs_list": [ 00:23:57.788 { 00:23:57.788 "name": "spare", 00:23:57.788 "uuid": "c55c8c4a-50ec-5942-9871-00c9801def6b", 00:23:57.788 "is_configured": true, 00:23:57.788 "data_offset": 2048, 00:23:57.788 "data_size": 63488 00:23:57.788 }, 00:23:57.788 { 00:23:57.788 "name": "BaseBdev2", 00:23:57.788 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:23:57.788 "is_configured": true, 00:23:57.788 "data_offset": 2048, 00:23:57.788 "data_size": 63488 00:23:57.788 }, 00:23:57.788 { 00:23:57.788 "name": "BaseBdev3", 00:23:57.788 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:23:57.788 "is_configured": true, 00:23:57.788 "data_offset": 2048, 00:23:57.788 "data_size": 63488 00:23:57.788 } 00:23:57.788 ] 00:23:57.788 }' 00:23:57.788 01:06:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:57.788 01:06:32 -- common/autotest_common.sh@10 -- # set +x 00:23:58.355 01:06:32 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:58.614 [2024-11-18 01:06:32.913996] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:58.614 [2024-11-18 01:06:32.914041] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:58.614 [2024-11-18 01:06:32.914196] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:58.614 [2024-11-18 01:06:32.914315] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:58.614 [2024-11-18 01:06:32.914327] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:23:58.614 01:06:32 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.614 01:06:32 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:58.873 01:06:33 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:58.873 01:06:33 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:58.873 01:06:33 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:58.873 01:06:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:58.873 01:06:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:58.873 01:06:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:58.873 01:06:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:58.873 01:06:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:58.873 01:06:33 -- bdev/nbd_common.sh@12 -- # local i 00:23:58.873 01:06:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:58.873 01:06:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:58.873 01:06:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:59.132 /dev/nbd0 00:23:59.132 01:06:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:59.132 01:06:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:59.132 01:06:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:59.132 01:06:33 -- common/autotest_common.sh@867 -- # local i 00:23:59.132 01:06:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:59.132 01:06:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:59.132 01:06:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:59.132 01:06:33 -- common/autotest_common.sh@871 -- # break 00:23:59.132 01:06:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:59.132 01:06:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:59.132 01:06:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:59.132 1+0 records in 00:23:59.132 1+0 records out 00:23:59.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220161 s, 18.6 MB/s 00:23:59.132 01:06:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.132 01:06:33 -- common/autotest_common.sh@884 -- # size=4096 00:23:59.132 01:06:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.132 01:06:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:59.132 01:06:33 -- common/autotest_common.sh@887 -- # return 0 00:23:59.132 01:06:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:59.132 01:06:33 -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:59.132 01:06:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:59.391 /dev/nbd1 00:23:59.391 01:06:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:59.391 01:06:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:59.391 01:06:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:59.391 01:06:33 -- common/autotest_common.sh@867 -- # local i 00:23:59.391 01:06:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:59.391 01:06:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:59.391 01:06:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:59.391 01:06:33 -- common/autotest_common.sh@871 -- # break 00:23:59.391 01:06:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:59.391 01:06:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:59.391 01:06:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:59.391 1+0 records in 00:23:59.391 1+0 records out 00:23:59.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382093 s, 10.7 MB/s 00:23:59.391 01:06:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.391 01:06:33 -- common/autotest_common.sh@884 -- # size=4096 00:23:59.391 01:06:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.391 01:06:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:59.391 01:06:33 -- common/autotest_common.sh@887 -- # return 0 00:23:59.391 01:06:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:59.391 01:06:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:59.391 01:06:33 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:59.391 01:06:33 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:59.391 01:06:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:59.391 01:06:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:59.391 01:06:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:59.391 01:06:33 -- bdev/nbd_common.sh@51 -- # local i 00:23:59.391 01:06:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:59.391 01:06:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:59.650 01:06:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:59.650 01:06:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:59.650 01:06:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:59.650 01:06:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:59.650 01:06:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:59.650 01:06:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:59.650 01:06:34 -- bdev/nbd_common.sh@41 -- # break 00:23:59.650 01:06:34 -- bdev/nbd_common.sh@45 -- # return 0 00:23:59.650 01:06:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:59.650 01:06:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:59.909 01:06:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:59.909 01:06:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:59.909 01:06:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:59.909 
01:06:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:59.909 01:06:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:59.909 01:06:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:59.909 01:06:34 -- bdev/nbd_common.sh@41 -- # break 00:23:59.909 01:06:34 -- bdev/nbd_common.sh@45 -- # return 0 00:23:59.909 01:06:34 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:59.909 01:06:34 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:59.909 01:06:34 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:59.909 01:06:34 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:00.169 01:06:34 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:00.428 [2024-11-18 01:06:34.643423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:00.428 [2024-11-18 01:06:34.643793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.428 [2024-11-18 01:06:34.643902] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:00.428 [2024-11-18 01:06:34.644052] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.428 [2024-11-18 01:06:34.646884] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.428 [2024-11-18 01:06:34.647080] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:00.428 [2024-11-18 01:06:34.647286] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:00.428 [2024-11-18 01:06:34.647455] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:00.428 BaseBdev1 00:24:00.428 01:06:34 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:00.428 01:06:34 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:00.428 01:06:34 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:00.687 01:06:34 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:00.687 [2024-11-18 01:06:35.087776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:00.687 [2024-11-18 01:06:35.088058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.687 [2024-11-18 01:06:35.088148] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:00.687 [2024-11-18 01:06:35.088349] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.687 [2024-11-18 01:06:35.088922] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.687 [2024-11-18 01:06:35.089100] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:00.687 [2024-11-18 01:06:35.089292] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:00.687 [2024-11-18 01:06:35.089389] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:00.687 [2024-11-18 01:06:35.089491] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid 
bdev: raid_bdev1 00:24:00.687 [2024-11-18 01:06:35.089632] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:24:00.687 [2024-11-18 01:06:35.089792] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:00.946 BaseBdev2 00:24:00.946 01:06:35 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:00.946 01:06:35 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:00.946 01:06:35 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:00.946 01:06:35 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:01.205 [2024-11-18 01:06:35.455864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:01.205 [2024-11-18 01:06:35.456243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:01.206 [2024-11-18 01:06:35.456332] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:01.206 [2024-11-18 01:06:35.456449] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:01.206 [2024-11-18 01:06:35.457029] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:01.206 [2024-11-18 01:06:35.457208] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:01.206 [2024-11-18 01:06:35.457398] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:01.206 [2024-11-18 01:06:35.457516] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:01.206 BaseBdev3 00:24:01.206 01:06:35 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:01.465 01:06:35 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:01.465 [2024-11-18 01:06:35.823924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:01.465 [2024-11-18 01:06:35.824279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:01.465 [2024-11-18 01:06:35.824368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:01.465 [2024-11-18 01:06:35.824478] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:01.465 [2024-11-18 01:06:35.825086] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:01.465 [2024-11-18 01:06:35.825301] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:01.465 [2024-11-18 01:06:35.825499] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:01.465 [2024-11-18 01:06:35.825628] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:01.465 spare 00:24:01.465 01:06:35 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:01.465 01:06:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:01.465 01:06:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:01.465 01:06:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:01.465 01:06:35 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:24:01.465 01:06:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:01.465 01:06:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:01.465 01:06:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:01.465 01:06:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:01.465 01:06:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:01.465 01:06:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.465 01:06:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.724 [2024-11-18 01:06:35.925809] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:24:01.724 [2024-11-18 01:06:35.926070] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:01.724 [2024-11-18 01:06:35.926346] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000044230 00:24:01.724 [2024-11-18 01:06:35.927243] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:24:01.724 [2024-11-18 01:06:35.927372] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:24:01.724 [2024-11-18 01:06:35.927669] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:01.724 01:06:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:01.724 "name": "raid_bdev1", 00:24:01.724 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:24:01.724 "strip_size_kb": 64, 00:24:01.724 "state": "online", 00:24:01.724 "raid_level": "raid5f", 00:24:01.724 "superblock": true, 00:24:01.724 "num_base_bdevs": 3, 00:24:01.724 "num_base_bdevs_discovered": 3, 00:24:01.724 "num_base_bdevs_operational": 3, 00:24:01.724 "base_bdevs_list": [ 00:24:01.724 { 00:24:01.724 "name": "spare", 00:24:01.724 "uuid": "c55c8c4a-50ec-5942-9871-00c9801def6b", 00:24:01.724 "is_configured": true, 00:24:01.724 "data_offset": 2048, 00:24:01.724 "data_size": 63488 00:24:01.724 }, 00:24:01.724 { 00:24:01.724 "name": "BaseBdev2", 00:24:01.724 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:24:01.724 "is_configured": true, 00:24:01.724 "data_offset": 2048, 00:24:01.724 "data_size": 63488 00:24:01.724 }, 00:24:01.724 { 00:24:01.724 "name": "BaseBdev3", 00:24:01.724 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:24:01.724 "is_configured": true, 00:24:01.724 "data_offset": 2048, 00:24:01.724 "data_size": 63488 00:24:01.724 } 00:24:01.724 ] 00:24:01.724 }' 00:24:01.724 01:06:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:01.724 01:06:36 -- common/autotest_common.sh@10 -- # set +x 00:24:02.293 01:06:36 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:02.293 01:06:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:02.293 01:06:36 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:02.293 01:06:36 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:02.293 01:06:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:02.293 01:06:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.293 01:06:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.552 01:06:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:02.552 "name": "raid_bdev1", 00:24:02.552 "uuid": "9c6048e8-dc10-4f0b-ba4f-4b06761e8b19", 00:24:02.552 
"strip_size_kb": 64, 00:24:02.552 "state": "online", 00:24:02.552 "raid_level": "raid5f", 00:24:02.552 "superblock": true, 00:24:02.552 "num_base_bdevs": 3, 00:24:02.552 "num_base_bdevs_discovered": 3, 00:24:02.552 "num_base_bdevs_operational": 3, 00:24:02.552 "base_bdevs_list": [ 00:24:02.552 { 00:24:02.552 "name": "spare", 00:24:02.552 "uuid": "c55c8c4a-50ec-5942-9871-00c9801def6b", 00:24:02.552 "is_configured": true, 00:24:02.552 "data_offset": 2048, 00:24:02.552 "data_size": 63488 00:24:02.552 }, 00:24:02.552 { 00:24:02.552 "name": "BaseBdev2", 00:24:02.552 "uuid": "58c37d7b-9a40-5f6c-9a3b-36959f1b42f0", 00:24:02.552 "is_configured": true, 00:24:02.552 "data_offset": 2048, 00:24:02.552 "data_size": 63488 00:24:02.552 }, 00:24:02.552 { 00:24:02.552 "name": "BaseBdev3", 00:24:02.552 "uuid": "806c552d-e238-5927-b339-73671e80ada1", 00:24:02.552 "is_configured": true, 00:24:02.552 "data_offset": 2048, 00:24:02.552 "data_size": 63488 00:24:02.552 } 00:24:02.552 ] 00:24:02.552 }' 00:24:02.552 01:06:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:02.552 01:06:36 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:02.552 01:06:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:02.552 01:06:36 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:02.552 01:06:36 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:02.552 01:06:36 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.810 01:06:37 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:02.810 01:06:37 -- bdev/bdev_raid.sh@709 -- # killprocess 139310 00:24:02.810 01:06:37 -- common/autotest_common.sh@936 -- # '[' -z 139310 ']' 00:24:02.810 01:06:37 -- common/autotest_common.sh@940 -- # kill -0 139310 00:24:02.810 01:06:37 -- common/autotest_common.sh@941 -- # uname 00:24:02.810 01:06:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:02.810 01:06:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139310 00:24:03.069 01:06:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:03.069 01:06:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:03.069 01:06:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139310' 00:24:03.069 killing process with pid 139310 00:24:03.069 01:06:37 -- common/autotest_common.sh@955 -- # kill 139310 00:24:03.069 Received shutdown signal, test time was about 60.000000 seconds 00:24:03.069 00:24:03.069 Latency(us) 00:24:03.069 [2024-11-18T01:06:37.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.069 [2024-11-18T01:06:37.468Z] =================================================================================================================== 00:24:03.069 [2024-11-18T01:06:37.468Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:03.069 01:06:37 -- common/autotest_common.sh@960 -- # wait 139310 00:24:03.069 [2024-11-18 01:06:37.219723] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:03.069 [2024-11-18 01:06:37.219833] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:03.069 [2024-11-18 01:06:37.219938] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:03.069 [2024-11-18 01:06:37.219951] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state 
offline 00:24:03.069 [2024-11-18 01:06:37.294681] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:03.328 01:06:37 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:03.328 00:24:03.328 real 0m22.615s 00:24:03.328 user 0m34.425s 00:24:03.328 sys 0m3.900s 00:24:03.328 01:06:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:03.328 01:06:37 -- common/autotest_common.sh@10 -- # set +x 00:24:03.328 ************************************ 00:24:03.328 END TEST raid5f_rebuild_test_sb 00:24:03.328 ************************************ 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:24:03.588 01:06:37 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:24:03.588 01:06:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:03.588 01:06:37 -- common/autotest_common.sh@10 -- # set +x 00:24:03.588 ************************************ 00:24:03.588 START TEST raid5f_state_function_test 00:24:03.588 ************************************ 00:24:03.588 01:06:37 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 false 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@226 -- 
# raid_pid=139924 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 139924' 00:24:03.588 Process raid pid: 139924 00:24:03.588 01:06:37 -- bdev/bdev_raid.sh@228 -- # waitforlisten 139924 /var/tmp/spdk-raid.sock 00:24:03.588 01:06:37 -- common/autotest_common.sh@829 -- # '[' -z 139924 ']' 00:24:03.588 01:06:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:03.588 01:06:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.588 01:06:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:03.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:03.588 01:06:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.588 01:06:37 -- common/autotest_common.sh@10 -- # set +x 00:24:03.588 [2024-11-18 01:06:37.840337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:03.588 [2024-11-18 01:06:37.840783] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.588 [2024-11-18 01:06:37.983650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.848 [2024-11-18 01:06:38.065654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.848 [2024-11-18 01:06:38.145899] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:04.415 01:06:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.415 01:06:38 -- common/autotest_common.sh@862 -- # return 0 00:24:04.415 01:06:38 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:04.674 [2024-11-18 01:06:39.036147] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:04.674 [2024-11-18 01:06:39.036475] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:04.674 [2024-11-18 01:06:39.036617] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:04.674 [2024-11-18 01:06:39.036682] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:04.674 [2024-11-18 01:06:39.036797] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:04.674 [2024-11-18 01:06:39.036881] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:04.674 [2024-11-18 01:06:39.036912] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:04.674 [2024-11-18 01:06:39.037053] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:04.674 01:06:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:04.674 01:06:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:04.674 01:06:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:04.674 01:06:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:04.674 01:06:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:04.674 01:06:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 
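The entries above show the state-function test creating a raid5f bdev named Existed_Raid over four base bdevs that do not exist yet, then preparing to confirm that it stays in the "configuring" state. A minimal sketch of that same check, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock and that rpc.py and jq are on PATH (repository paths shortened here for readability):

    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # query the raid bdev and print its state; expected to be "configuring"
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'

The raid bdev only leaves "configuring" once all four named base bdevs have been created and claimed, which is what the following log entries exercise.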
00:24:04.674 01:06:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:04.674 01:06:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:04.674 01:06:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:04.674 01:06:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:04.674 01:06:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.674 01:06:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:04.939 01:06:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:04.939 "name": "Existed_Raid", 00:24:04.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.939 "strip_size_kb": 64, 00:24:04.939 "state": "configuring", 00:24:04.939 "raid_level": "raid5f", 00:24:04.939 "superblock": false, 00:24:04.939 "num_base_bdevs": 4, 00:24:04.939 "num_base_bdevs_discovered": 0, 00:24:04.939 "num_base_bdevs_operational": 4, 00:24:04.939 "base_bdevs_list": [ 00:24:04.939 { 00:24:04.939 "name": "BaseBdev1", 00:24:04.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.939 "is_configured": false, 00:24:04.939 "data_offset": 0, 00:24:04.939 "data_size": 0 00:24:04.939 }, 00:24:04.939 { 00:24:04.939 "name": "BaseBdev2", 00:24:04.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.939 "is_configured": false, 00:24:04.939 "data_offset": 0, 00:24:04.939 "data_size": 0 00:24:04.939 }, 00:24:04.939 { 00:24:04.939 "name": "BaseBdev3", 00:24:04.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.939 "is_configured": false, 00:24:04.939 "data_offset": 0, 00:24:04.939 "data_size": 0 00:24:04.939 }, 00:24:04.939 { 00:24:04.939 "name": "BaseBdev4", 00:24:04.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.939 "is_configured": false, 00:24:04.940 "data_offset": 0, 00:24:04.940 "data_size": 0 00:24:04.940 } 00:24:04.940 ] 00:24:04.940 }' 00:24:04.940 01:06:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:04.940 01:06:39 -- common/autotest_common.sh@10 -- # set +x 00:24:05.511 01:06:39 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:05.770 [2024-11-18 01:06:39.984158] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:05.770 [2024-11-18 01:06:39.984386] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:24:05.770 01:06:39 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:06.031 [2024-11-18 01:06:40.256315] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:06.031 [2024-11-18 01:06:40.256652] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:06.031 [2024-11-18 01:06:40.256760] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:06.031 [2024-11-18 01:06:40.256827] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:06.031 [2024-11-18 01:06:40.256919] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:06.031 [2024-11-18 01:06:40.256972] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:06.031 [2024-11-18 01:06:40.256998] 
bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:06.031 [2024-11-18 01:06:40.257102] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:06.031 01:06:40 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:06.290 [2024-11-18 01:06:40.508430] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:06.290 BaseBdev1 00:24:06.290 01:06:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:06.290 01:06:40 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:06.290 01:06:40 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:06.290 01:06:40 -- common/autotest_common.sh@899 -- # local i 00:24:06.290 01:06:40 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:06.290 01:06:40 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:06.290 01:06:40 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:06.549 01:06:40 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:06.549 [ 00:24:06.549 { 00:24:06.549 "name": "BaseBdev1", 00:24:06.549 "aliases": [ 00:24:06.549 "2b5238b6-fc36-41c5-b27a-9d9e5759c7ed" 00:24:06.549 ], 00:24:06.549 "product_name": "Malloc disk", 00:24:06.549 "block_size": 512, 00:24:06.549 "num_blocks": 65536, 00:24:06.549 "uuid": "2b5238b6-fc36-41c5-b27a-9d9e5759c7ed", 00:24:06.549 "assigned_rate_limits": { 00:24:06.549 "rw_ios_per_sec": 0, 00:24:06.549 "rw_mbytes_per_sec": 0, 00:24:06.549 "r_mbytes_per_sec": 0, 00:24:06.549 "w_mbytes_per_sec": 0 00:24:06.549 }, 00:24:06.549 "claimed": true, 00:24:06.549 "claim_type": "exclusive_write", 00:24:06.549 "zoned": false, 00:24:06.549 "supported_io_types": { 00:24:06.549 "read": true, 00:24:06.549 "write": true, 00:24:06.549 "unmap": true, 00:24:06.549 "write_zeroes": true, 00:24:06.549 "flush": true, 00:24:06.549 "reset": true, 00:24:06.549 "compare": false, 00:24:06.549 "compare_and_write": false, 00:24:06.549 "abort": true, 00:24:06.549 "nvme_admin": false, 00:24:06.549 "nvme_io": false 00:24:06.549 }, 00:24:06.549 "memory_domains": [ 00:24:06.549 { 00:24:06.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.549 "dma_device_type": 2 00:24:06.549 } 00:24:06.549 ], 00:24:06.549 "driver_specific": {} 00:24:06.549 } 00:24:06.549 ] 00:24:06.549 01:06:40 -- common/autotest_common.sh@905 -- # return 0 00:24:06.549 01:06:40 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:06.549 01:06:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:06.549 01:06:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:06.549 01:06:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:06.549 01:06:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:06.549 01:06:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:06.549 01:06:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:06.549 01:06:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:06.549 01:06:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:06.549 01:06:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:06.549 01:06:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.549 01:06:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:06.808 01:06:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:06.808 "name": "Existed_Raid", 00:24:06.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.808 "strip_size_kb": 64, 00:24:06.808 "state": "configuring", 00:24:06.808 "raid_level": "raid5f", 00:24:06.808 "superblock": false, 00:24:06.808 "num_base_bdevs": 4, 00:24:06.808 "num_base_bdevs_discovered": 1, 00:24:06.808 "num_base_bdevs_operational": 4, 00:24:06.808 "base_bdevs_list": [ 00:24:06.808 { 00:24:06.808 "name": "BaseBdev1", 00:24:06.808 "uuid": "2b5238b6-fc36-41c5-b27a-9d9e5759c7ed", 00:24:06.808 "is_configured": true, 00:24:06.808 "data_offset": 0, 00:24:06.808 "data_size": 65536 00:24:06.808 }, 00:24:06.808 { 00:24:06.808 "name": "BaseBdev2", 00:24:06.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.808 "is_configured": false, 00:24:06.808 "data_offset": 0, 00:24:06.808 "data_size": 0 00:24:06.808 }, 00:24:06.808 { 00:24:06.808 "name": "BaseBdev3", 00:24:06.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.808 "is_configured": false, 00:24:06.808 "data_offset": 0, 00:24:06.808 "data_size": 0 00:24:06.808 }, 00:24:06.808 { 00:24:06.808 "name": "BaseBdev4", 00:24:06.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.808 "is_configured": false, 00:24:06.808 "data_offset": 0, 00:24:06.808 "data_size": 0 00:24:06.808 } 00:24:06.808 ] 00:24:06.808 }' 00:24:06.808 01:06:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:06.808 01:06:41 -- common/autotest_common.sh@10 -- # set +x 00:24:07.744 01:06:41 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:07.744 [2024-11-18 01:06:41.984760] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:07.744 [2024-11-18 01:06:41.985092] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:24:07.744 01:06:41 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:24:07.744 01:06:41 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:08.003 [2024-11-18 01:06:42.180922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:08.003 [2024-11-18 01:06:42.183956] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:08.003 [2024-11-18 01:06:42.184205] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:08.003 [2024-11-18 01:06:42.184330] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:08.003 [2024-11-18 01:06:42.184412] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:08.003 [2024-11-18 01:06:42.184559] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:08.004 [2024-11-18 01:06:42.184630] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:08.004 01:06:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:08.004 01:06:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:08.004 01:06:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
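At this point the log shows Existed_Raid being deleted and recreated now that BaseBdev1 exists, after which the test adds the remaining base bdevs one at a time. A rough sketch of that loop, against the same RPC socket; the 32/512 arguments mirror the bdev_malloc_create calls recorded below (matching the 65536 blocks of 512 bytes reported by bdev_get_bdevs):

    for i in 2 3 4; do
        # create the next malloc base bdev named in the raid configuration
        ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev$i
        # let examine run so the raid module can claim the new bdev
        ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
        # check how many base bdevs the raid bdev has discovered so far
        ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'
    done

Each newly created malloc bdev is claimed during examine (the "bdev BaseBdevN is claimed" messages below), so the discovered count climbs from 1 to 4 and the raid bdev moves from "configuring" to "online" after the last one.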
00:24:08.004 01:06:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:08.004 01:06:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:08.004 01:06:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:08.004 01:06:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:08.004 01:06:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:08.004 01:06:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:08.004 01:06:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:08.004 01:06:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:08.004 01:06:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:08.004 01:06:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:08.004 01:06:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.263 01:06:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:08.263 "name": "Existed_Raid", 00:24:08.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.263 "strip_size_kb": 64, 00:24:08.263 "state": "configuring", 00:24:08.263 "raid_level": "raid5f", 00:24:08.263 "superblock": false, 00:24:08.263 "num_base_bdevs": 4, 00:24:08.263 "num_base_bdevs_discovered": 1, 00:24:08.263 "num_base_bdevs_operational": 4, 00:24:08.263 "base_bdevs_list": [ 00:24:08.263 { 00:24:08.263 "name": "BaseBdev1", 00:24:08.263 "uuid": "2b5238b6-fc36-41c5-b27a-9d9e5759c7ed", 00:24:08.263 "is_configured": true, 00:24:08.263 "data_offset": 0, 00:24:08.263 "data_size": 65536 00:24:08.263 }, 00:24:08.263 { 00:24:08.263 "name": "BaseBdev2", 00:24:08.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.263 "is_configured": false, 00:24:08.263 "data_offset": 0, 00:24:08.263 "data_size": 0 00:24:08.263 }, 00:24:08.263 { 00:24:08.263 "name": "BaseBdev3", 00:24:08.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.263 "is_configured": false, 00:24:08.263 "data_offset": 0, 00:24:08.263 "data_size": 0 00:24:08.263 }, 00:24:08.263 { 00:24:08.263 "name": "BaseBdev4", 00:24:08.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.263 "is_configured": false, 00:24:08.263 "data_offset": 0, 00:24:08.263 "data_size": 0 00:24:08.263 } 00:24:08.263 ] 00:24:08.263 }' 00:24:08.263 01:06:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:08.263 01:06:42 -- common/autotest_common.sh@10 -- # set +x 00:24:08.830 01:06:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:09.089 [2024-11-18 01:06:43.243600] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:09.089 BaseBdev2 00:24:09.089 01:06:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:09.089 01:06:43 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:09.089 01:06:43 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:09.089 01:06:43 -- common/autotest_common.sh@899 -- # local i 00:24:09.089 01:06:43 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:09.089 01:06:43 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:09.089 01:06:43 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:09.348 01:06:43 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
00:24:09.348 [ 00:24:09.348 { 00:24:09.348 "name": "BaseBdev2", 00:24:09.348 "aliases": [ 00:24:09.348 "f6ecfe42-631c-48d7-8287-2e29e09dc89b" 00:24:09.348 ], 00:24:09.348 "product_name": "Malloc disk", 00:24:09.348 "block_size": 512, 00:24:09.348 "num_blocks": 65536, 00:24:09.348 "uuid": "f6ecfe42-631c-48d7-8287-2e29e09dc89b", 00:24:09.348 "assigned_rate_limits": { 00:24:09.348 "rw_ios_per_sec": 0, 00:24:09.348 "rw_mbytes_per_sec": 0, 00:24:09.348 "r_mbytes_per_sec": 0, 00:24:09.348 "w_mbytes_per_sec": 0 00:24:09.348 }, 00:24:09.348 "claimed": true, 00:24:09.348 "claim_type": "exclusive_write", 00:24:09.348 "zoned": false, 00:24:09.348 "supported_io_types": { 00:24:09.348 "read": true, 00:24:09.349 "write": true, 00:24:09.349 "unmap": true, 00:24:09.349 "write_zeroes": true, 00:24:09.349 "flush": true, 00:24:09.349 "reset": true, 00:24:09.349 "compare": false, 00:24:09.349 "compare_and_write": false, 00:24:09.349 "abort": true, 00:24:09.349 "nvme_admin": false, 00:24:09.349 "nvme_io": false 00:24:09.349 }, 00:24:09.349 "memory_domains": [ 00:24:09.349 { 00:24:09.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.349 "dma_device_type": 2 00:24:09.349 } 00:24:09.349 ], 00:24:09.349 "driver_specific": {} 00:24:09.349 } 00:24:09.349 ] 00:24:09.607 01:06:43 -- common/autotest_common.sh@905 -- # return 0 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:09.607 01:06:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:09.607 "name": "Existed_Raid", 00:24:09.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.607 "strip_size_kb": 64, 00:24:09.607 "state": "configuring", 00:24:09.607 "raid_level": "raid5f", 00:24:09.607 "superblock": false, 00:24:09.607 "num_base_bdevs": 4, 00:24:09.607 "num_base_bdevs_discovered": 2, 00:24:09.607 "num_base_bdevs_operational": 4, 00:24:09.607 "base_bdevs_list": [ 00:24:09.607 { 00:24:09.607 "name": "BaseBdev1", 00:24:09.607 "uuid": "2b5238b6-fc36-41c5-b27a-9d9e5759c7ed", 00:24:09.607 "is_configured": true, 00:24:09.607 "data_offset": 0, 00:24:09.607 "data_size": 65536 00:24:09.607 }, 00:24:09.607 { 00:24:09.607 "name": "BaseBdev2", 00:24:09.607 "uuid": "f6ecfe42-631c-48d7-8287-2e29e09dc89b", 00:24:09.607 "is_configured": true, 00:24:09.607 "data_offset": 0, 00:24:09.607 "data_size": 65536 00:24:09.607 }, 00:24:09.607 { 00:24:09.607 "name": "BaseBdev3", 00:24:09.607 "uuid": "00000000-0000-0000-0000-000000000000", 
00:24:09.607 "is_configured": false, 00:24:09.608 "data_offset": 0, 00:24:09.608 "data_size": 0 00:24:09.608 }, 00:24:09.608 { 00:24:09.608 "name": "BaseBdev4", 00:24:09.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.608 "is_configured": false, 00:24:09.608 "data_offset": 0, 00:24:09.608 "data_size": 0 00:24:09.608 } 00:24:09.608 ] 00:24:09.608 }' 00:24:09.608 01:06:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:09.608 01:06:43 -- common/autotest_common.sh@10 -- # set +x 00:24:10.175 01:06:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:10.434 [2024-11-18 01:06:44.691940] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:10.434 BaseBdev3 00:24:10.434 01:06:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:10.434 01:06:44 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:10.434 01:06:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:10.434 01:06:44 -- common/autotest_common.sh@899 -- # local i 00:24:10.434 01:06:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:10.434 01:06:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:10.434 01:06:44 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:10.694 01:06:44 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:10.694 [ 00:24:10.694 { 00:24:10.694 "name": "BaseBdev3", 00:24:10.694 "aliases": [ 00:24:10.694 "745504ed-fe51-42be-b83c-620425b63ad7" 00:24:10.694 ], 00:24:10.694 "product_name": "Malloc disk", 00:24:10.694 "block_size": 512, 00:24:10.694 "num_blocks": 65536, 00:24:10.694 "uuid": "745504ed-fe51-42be-b83c-620425b63ad7", 00:24:10.694 "assigned_rate_limits": { 00:24:10.694 "rw_ios_per_sec": 0, 00:24:10.694 "rw_mbytes_per_sec": 0, 00:24:10.694 "r_mbytes_per_sec": 0, 00:24:10.694 "w_mbytes_per_sec": 0 00:24:10.694 }, 00:24:10.694 "claimed": true, 00:24:10.694 "claim_type": "exclusive_write", 00:24:10.694 "zoned": false, 00:24:10.694 "supported_io_types": { 00:24:10.694 "read": true, 00:24:10.694 "write": true, 00:24:10.694 "unmap": true, 00:24:10.694 "write_zeroes": true, 00:24:10.694 "flush": true, 00:24:10.694 "reset": true, 00:24:10.694 "compare": false, 00:24:10.694 "compare_and_write": false, 00:24:10.694 "abort": true, 00:24:10.694 "nvme_admin": false, 00:24:10.694 "nvme_io": false 00:24:10.694 }, 00:24:10.694 "memory_domains": [ 00:24:10.694 { 00:24:10.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.694 "dma_device_type": 2 00:24:10.694 } 00:24:10.694 ], 00:24:10.694 "driver_specific": {} 00:24:10.694 } 00:24:10.694 ] 00:24:10.694 01:06:45 -- common/autotest_common.sh@905 -- # return 0 00:24:10.694 01:06:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:10.694 01:06:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:10.694 01:06:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:10.694 01:06:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:10.694 01:06:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:10.694 01:06:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:10.694 01:06:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:10.694 01:06:45 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:24:10.694 01:06:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:10.694 01:06:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:10.694 01:06:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:10.694 01:06:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:10.694 01:06:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.694 01:06:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:11.263 01:06:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:11.263 "name": "Existed_Raid", 00:24:11.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.263 "strip_size_kb": 64, 00:24:11.263 "state": "configuring", 00:24:11.263 "raid_level": "raid5f", 00:24:11.263 "superblock": false, 00:24:11.263 "num_base_bdevs": 4, 00:24:11.263 "num_base_bdevs_discovered": 3, 00:24:11.263 "num_base_bdevs_operational": 4, 00:24:11.263 "base_bdevs_list": [ 00:24:11.263 { 00:24:11.263 "name": "BaseBdev1", 00:24:11.263 "uuid": "2b5238b6-fc36-41c5-b27a-9d9e5759c7ed", 00:24:11.263 "is_configured": true, 00:24:11.263 "data_offset": 0, 00:24:11.263 "data_size": 65536 00:24:11.263 }, 00:24:11.263 { 00:24:11.263 "name": "BaseBdev2", 00:24:11.263 "uuid": "f6ecfe42-631c-48d7-8287-2e29e09dc89b", 00:24:11.263 "is_configured": true, 00:24:11.263 "data_offset": 0, 00:24:11.263 "data_size": 65536 00:24:11.263 }, 00:24:11.263 { 00:24:11.263 "name": "BaseBdev3", 00:24:11.263 "uuid": "745504ed-fe51-42be-b83c-620425b63ad7", 00:24:11.263 "is_configured": true, 00:24:11.263 "data_offset": 0, 00:24:11.263 "data_size": 65536 00:24:11.263 }, 00:24:11.263 { 00:24:11.263 "name": "BaseBdev4", 00:24:11.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.263 "is_configured": false, 00:24:11.263 "data_offset": 0, 00:24:11.263 "data_size": 0 00:24:11.263 } 00:24:11.263 ] 00:24:11.263 }' 00:24:11.263 01:06:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:11.263 01:06:45 -- common/autotest_common.sh@10 -- # set +x 00:24:11.832 01:06:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:12.092 [2024-11-18 01:06:46.275664] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:12.092 [2024-11-18 01:06:46.275988] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:24:12.092 [2024-11-18 01:06:46.276029] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:12.092 [2024-11-18 01:06:46.276380] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:24:12.092 [2024-11-18 01:06:46.277304] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:24:12.092 [2024-11-18 01:06:46.277422] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:24:12.092 [2024-11-18 01:06:46.277799] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:12.092 BaseBdev4 00:24:12.092 01:06:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:24:12.092 01:06:46 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:12.092 01:06:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:12.092 01:06:46 -- common/autotest_common.sh@899 -- # local i 00:24:12.092 01:06:46 -- common/autotest_common.sh@900 -- # [[ 
-z '' ]] 00:24:12.092 01:06:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:12.092 01:06:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:12.350 01:06:46 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:12.609 [ 00:24:12.609 { 00:24:12.609 "name": "BaseBdev4", 00:24:12.609 "aliases": [ 00:24:12.609 "e4969493-702b-4f0f-a863-7643ee310858" 00:24:12.609 ], 00:24:12.609 "product_name": "Malloc disk", 00:24:12.609 "block_size": 512, 00:24:12.609 "num_blocks": 65536, 00:24:12.609 "uuid": "e4969493-702b-4f0f-a863-7643ee310858", 00:24:12.609 "assigned_rate_limits": { 00:24:12.609 "rw_ios_per_sec": 0, 00:24:12.609 "rw_mbytes_per_sec": 0, 00:24:12.609 "r_mbytes_per_sec": 0, 00:24:12.609 "w_mbytes_per_sec": 0 00:24:12.609 }, 00:24:12.609 "claimed": true, 00:24:12.609 "claim_type": "exclusive_write", 00:24:12.609 "zoned": false, 00:24:12.609 "supported_io_types": { 00:24:12.609 "read": true, 00:24:12.610 "write": true, 00:24:12.610 "unmap": true, 00:24:12.610 "write_zeroes": true, 00:24:12.610 "flush": true, 00:24:12.610 "reset": true, 00:24:12.610 "compare": false, 00:24:12.610 "compare_and_write": false, 00:24:12.610 "abort": true, 00:24:12.610 "nvme_admin": false, 00:24:12.610 "nvme_io": false 00:24:12.610 }, 00:24:12.610 "memory_domains": [ 00:24:12.610 { 00:24:12.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:12.610 "dma_device_type": 2 00:24:12.610 } 00:24:12.610 ], 00:24:12.610 "driver_specific": {} 00:24:12.610 } 00:24:12.610 ] 00:24:12.610 01:06:46 -- common/autotest_common.sh@905 -- # return 0 00:24:12.610 01:06:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:12.610 01:06:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:12.610 01:06:46 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:12.610 01:06:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:12.610 01:06:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:12.610 01:06:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:12.610 01:06:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:12.610 01:06:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:12.610 01:06:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:12.610 01:06:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:12.610 01:06:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:12.610 01:06:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:12.610 01:06:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.610 01:06:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:12.869 01:06:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:12.869 "name": "Existed_Raid", 00:24:12.869 "uuid": "0da77bcd-662f-4ce2-b150-06ff2fd1bb22", 00:24:12.869 "strip_size_kb": 64, 00:24:12.869 "state": "online", 00:24:12.869 "raid_level": "raid5f", 00:24:12.869 "superblock": false, 00:24:12.869 "num_base_bdevs": 4, 00:24:12.869 "num_base_bdevs_discovered": 4, 00:24:12.869 "num_base_bdevs_operational": 4, 00:24:12.869 "base_bdevs_list": [ 00:24:12.869 { 00:24:12.869 "name": "BaseBdev1", 00:24:12.869 "uuid": "2b5238b6-fc36-41c5-b27a-9d9e5759c7ed", 00:24:12.869 "is_configured": true, 00:24:12.869 "data_offset": 
0, 00:24:12.869 "data_size": 65536 00:24:12.869 }, 00:24:12.869 { 00:24:12.869 "name": "BaseBdev2", 00:24:12.869 "uuid": "f6ecfe42-631c-48d7-8287-2e29e09dc89b", 00:24:12.869 "is_configured": true, 00:24:12.869 "data_offset": 0, 00:24:12.869 "data_size": 65536 00:24:12.869 }, 00:24:12.869 { 00:24:12.869 "name": "BaseBdev3", 00:24:12.869 "uuid": "745504ed-fe51-42be-b83c-620425b63ad7", 00:24:12.869 "is_configured": true, 00:24:12.869 "data_offset": 0, 00:24:12.869 "data_size": 65536 00:24:12.869 }, 00:24:12.869 { 00:24:12.869 "name": "BaseBdev4", 00:24:12.869 "uuid": "e4969493-702b-4f0f-a863-7643ee310858", 00:24:12.869 "is_configured": true, 00:24:12.869 "data_offset": 0, 00:24:12.869 "data_size": 65536 00:24:12.869 } 00:24:12.869 ] 00:24:12.869 }' 00:24:12.869 01:06:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:12.869 01:06:47 -- common/autotest_common.sh@10 -- # set +x 00:24:13.437 01:06:47 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:13.695 [2024-11-18 01:06:47.946850] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:13.695 01:06:47 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:13.695 01:06:47 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:13.695 01:06:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:13.695 01:06:47 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:13.695 01:06:47 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:13.695 01:06:47 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:13.695 01:06:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:13.695 01:06:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:13.695 01:06:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:13.695 01:06:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:13.695 01:06:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:13.696 01:06:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:13.696 01:06:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:13.696 01:06:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:13.696 01:06:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:13.696 01:06:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.696 01:06:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:13.955 01:06:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:13.955 "name": "Existed_Raid", 00:24:13.955 "uuid": "0da77bcd-662f-4ce2-b150-06ff2fd1bb22", 00:24:13.955 "strip_size_kb": 64, 00:24:13.955 "state": "online", 00:24:13.955 "raid_level": "raid5f", 00:24:13.955 "superblock": false, 00:24:13.955 "num_base_bdevs": 4, 00:24:13.955 "num_base_bdevs_discovered": 3, 00:24:13.955 "num_base_bdevs_operational": 3, 00:24:13.955 "base_bdevs_list": [ 00:24:13.955 { 00:24:13.955 "name": null, 00:24:13.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.955 "is_configured": false, 00:24:13.955 "data_offset": 0, 00:24:13.955 "data_size": 65536 00:24:13.955 }, 00:24:13.955 { 00:24:13.955 "name": "BaseBdev2", 00:24:13.955 "uuid": "f6ecfe42-631c-48d7-8287-2e29e09dc89b", 00:24:13.955 "is_configured": true, 00:24:13.955 "data_offset": 0, 00:24:13.955 "data_size": 65536 00:24:13.955 }, 00:24:13.955 { 00:24:13.955 "name": "BaseBdev3", 00:24:13.955 "uuid": 
"745504ed-fe51-42be-b83c-620425b63ad7", 00:24:13.955 "is_configured": true, 00:24:13.955 "data_offset": 0, 00:24:13.955 "data_size": 65536 00:24:13.955 }, 00:24:13.955 { 00:24:13.955 "name": "BaseBdev4", 00:24:13.955 "uuid": "e4969493-702b-4f0f-a863-7643ee310858", 00:24:13.955 "is_configured": true, 00:24:13.955 "data_offset": 0, 00:24:13.955 "data_size": 65536 00:24:13.955 } 00:24:13.955 ] 00:24:13.955 }' 00:24:13.955 01:06:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:13.955 01:06:48 -- common/autotest_common.sh@10 -- # set +x 00:24:14.524 01:06:48 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:14.524 01:06:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:14.524 01:06:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.524 01:06:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:14.782 01:06:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:14.782 01:06:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:14.782 01:06:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:15.041 [2024-11-18 01:06:49.291008] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:15.041 [2024-11-18 01:06:49.291358] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:15.041 [2024-11-18 01:06:49.291701] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:15.041 01:06:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:15.041 01:06:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:15.041 01:06:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.041 01:06:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:15.299 01:06:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:15.299 01:06:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:15.299 01:06:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:15.299 [2024-11-18 01:06:49.676653] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:15.558 01:06:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:15.558 01:06:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:15.558 01:06:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:15.558 01:06:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.558 01:06:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:15.558 01:06:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:15.558 01:06:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:15.816 [2024-11-18 01:06:50.069506] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:15.816 [2024-11-18 01:06:50.070017] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:24:15.816 01:06:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:15.816 01:06:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:15.816 01:06:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.816 01:06:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:16.075 01:06:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:16.075 01:06:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:16.075 01:06:50 -- bdev/bdev_raid.sh@287 -- # killprocess 139924 00:24:16.075 01:06:50 -- common/autotest_common.sh@936 -- # '[' -z 139924 ']' 00:24:16.075 01:06:50 -- common/autotest_common.sh@940 -- # kill -0 139924 00:24:16.075 01:06:50 -- common/autotest_common.sh@941 -- # uname 00:24:16.075 01:06:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:16.075 01:06:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139924 00:24:16.075 01:06:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:16.075 01:06:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:16.075 01:06:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139924' 00:24:16.075 killing process with pid 139924 00:24:16.075 01:06:50 -- common/autotest_common.sh@955 -- # kill 139924 00:24:16.075 [2024-11-18 01:06:50.336383] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:16.075 01:06:50 -- common/autotest_common.sh@960 -- # wait 139924 00:24:16.075 [2024-11-18 01:06:50.336829] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:16.334 01:06:50 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:16.334 00:24:16.334 real 0m12.957s 00:24:16.334 user 0m22.781s 00:24:16.334 sys 0m2.429s 00:24:16.334 ************************************ 00:24:16.334 END TEST raid5f_state_function_test 00:24:16.334 ************************************ 00:24:16.334 01:06:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:16.334 01:06:50 -- common/autotest_common.sh@10 -- # set +x 00:24:16.592 01:06:50 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:24:16.593 01:06:50 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:24:16.593 01:06:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:16.593 01:06:50 -- common/autotest_common.sh@10 -- # set +x 00:24:16.593 ************************************ 00:24:16.593 START TEST raid5f_state_function_test_sb 00:24:16.593 ************************************ 00:24:16.593 01:06:50 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 true 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # echo 
BaseBdev4 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@226 -- # raid_pid=140351 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 140351' 00:24:16.593 Process raid pid: 140351 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:16.593 01:06:50 -- bdev/bdev_raid.sh@228 -- # waitforlisten 140351 /var/tmp/spdk-raid.sock 00:24:16.593 01:06:50 -- common/autotest_common.sh@829 -- # '[' -z 140351 ']' 00:24:16.593 01:06:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:16.593 01:06:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.593 01:06:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:16.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:16.593 01:06:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.593 01:06:50 -- common/autotest_common.sh@10 -- # set +x 00:24:16.593 [2024-11-18 01:06:50.890696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
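The superblock variant starts a fresh bdev_svc target and waits for its RPC socket before issuing any raid RPCs; the only functional difference from the previous run is the -s flag passed to bdev_raid_create, which requests a raid superblock on the base bdevs. A hedged approximation of that startup handshake, with a simple socket poll standing in for the test's own waitforlisten helper (which additionally verifies RPC readiness):

    # launch the bdev_svc app used by the test (path shortened)
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # simplified stand-in for waitforlisten: wait until the RPC socket appears
    until [ -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done
    # same create call as before, plus -s for the superblock variant
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

The killprocess calls seen at the end of each test are what eventually terminate this bdev_svc instance once the state checks complete.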
00:24:16.593 [2024-11-18 01:06:50.891112] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.852 [2024-11-18 01:06:51.052126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.852 [2024-11-18 01:06:51.135343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.852 [2024-11-18 01:06:51.221726] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:17.788 01:06:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.788 01:06:51 -- common/autotest_common.sh@862 -- # return 0 00:24:17.788 01:06:51 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:17.788 [2024-11-18 01:06:52.004726] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:17.788 [2024-11-18 01:06:52.004960] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:17.788 [2024-11-18 01:06:52.005040] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:17.788 [2024-11-18 01:06:52.005092] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:17.788 [2024-11-18 01:06:52.005160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:17.788 [2024-11-18 01:06:52.005231] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:17.788 [2024-11-18 01:06:52.005339] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:17.788 [2024-11-18 01:06:52.005396] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:17.788 01:06:52 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:17.788 01:06:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:17.788 01:06:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:17.788 01:06:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:17.788 01:06:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:17.788 01:06:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:17.788 01:06:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:17.788 01:06:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:17.788 01:06:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:17.788 01:06:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:17.788 01:06:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.788 01:06:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:18.059 01:06:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:18.059 "name": "Existed_Raid", 00:24:18.059 "uuid": "caedbb52-5fa9-466a-b289-ecbb4a0dee3b", 00:24:18.059 "strip_size_kb": 64, 00:24:18.059 "state": "configuring", 00:24:18.059 "raid_level": "raid5f", 00:24:18.059 "superblock": true, 00:24:18.059 "num_base_bdevs": 4, 00:24:18.059 "num_base_bdevs_discovered": 0, 00:24:18.059 "num_base_bdevs_operational": 4, 00:24:18.059 "base_bdevs_list": [ 00:24:18.059 { 
00:24:18.059 "name": "BaseBdev1", 00:24:18.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.059 "is_configured": false, 00:24:18.059 "data_offset": 0, 00:24:18.059 "data_size": 0 00:24:18.059 }, 00:24:18.059 { 00:24:18.059 "name": "BaseBdev2", 00:24:18.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.059 "is_configured": false, 00:24:18.059 "data_offset": 0, 00:24:18.059 "data_size": 0 00:24:18.059 }, 00:24:18.059 { 00:24:18.059 "name": "BaseBdev3", 00:24:18.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.059 "is_configured": false, 00:24:18.059 "data_offset": 0, 00:24:18.059 "data_size": 0 00:24:18.059 }, 00:24:18.059 { 00:24:18.059 "name": "BaseBdev4", 00:24:18.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.059 "is_configured": false, 00:24:18.059 "data_offset": 0, 00:24:18.059 "data_size": 0 00:24:18.059 } 00:24:18.059 ] 00:24:18.059 }' 00:24:18.059 01:06:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:18.059 01:06:52 -- common/autotest_common.sh@10 -- # set +x 00:24:18.663 01:06:52 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:18.663 [2024-11-18 01:06:53.048800] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:18.663 [2024-11-18 01:06:53.049095] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:24:18.922 01:06:53 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:18.922 [2024-11-18 01:06:53.232856] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:18.922 [2024-11-18 01:06:53.233057] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:18.922 [2024-11-18 01:06:53.233139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:18.922 [2024-11-18 01:06:53.233200] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:18.922 [2024-11-18 01:06:53.233274] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:18.922 [2024-11-18 01:06:53.233320] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:18.922 [2024-11-18 01:06:53.233346] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:18.922 [2024-11-18 01:06:53.233430] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:18.922 01:06:53 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:19.181 [2024-11-18 01:06:53.433014] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:19.182 BaseBdev1 00:24:19.182 01:06:53 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:19.182 01:06:53 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:19.182 01:06:53 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:19.182 01:06:53 -- common/autotest_common.sh@899 -- # local i 00:24:19.182 01:06:53 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:19.182 01:06:53 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:19.182 01:06:53 -- common/autotest_common.sh@902 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:19.439 01:06:53 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:19.698 [ 00:24:19.698 { 00:24:19.698 "name": "BaseBdev1", 00:24:19.698 "aliases": [ 00:24:19.698 "430bc97a-e989-4289-88f4-b70048794bfb" 00:24:19.698 ], 00:24:19.698 "product_name": "Malloc disk", 00:24:19.698 "block_size": 512, 00:24:19.698 "num_blocks": 65536, 00:24:19.698 "uuid": "430bc97a-e989-4289-88f4-b70048794bfb", 00:24:19.698 "assigned_rate_limits": { 00:24:19.698 "rw_ios_per_sec": 0, 00:24:19.698 "rw_mbytes_per_sec": 0, 00:24:19.698 "r_mbytes_per_sec": 0, 00:24:19.698 "w_mbytes_per_sec": 0 00:24:19.698 }, 00:24:19.698 "claimed": true, 00:24:19.698 "claim_type": "exclusive_write", 00:24:19.698 "zoned": false, 00:24:19.698 "supported_io_types": { 00:24:19.698 "read": true, 00:24:19.698 "write": true, 00:24:19.698 "unmap": true, 00:24:19.698 "write_zeroes": true, 00:24:19.698 "flush": true, 00:24:19.698 "reset": true, 00:24:19.698 "compare": false, 00:24:19.698 "compare_and_write": false, 00:24:19.698 "abort": true, 00:24:19.698 "nvme_admin": false, 00:24:19.698 "nvme_io": false 00:24:19.698 }, 00:24:19.698 "memory_domains": [ 00:24:19.698 { 00:24:19.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.698 "dma_device_type": 2 00:24:19.698 } 00:24:19.698 ], 00:24:19.698 "driver_specific": {} 00:24:19.698 } 00:24:19.698 ] 00:24:19.698 01:06:53 -- common/autotest_common.sh@905 -- # return 0 00:24:19.698 01:06:53 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:19.698 01:06:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:19.698 01:06:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:19.698 01:06:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:19.698 01:06:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:19.698 01:06:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:19.698 01:06:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:19.698 01:06:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:19.698 01:06:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:19.698 01:06:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:19.698 01:06:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.698 01:06:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.698 01:06:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:19.698 "name": "Existed_Raid", 00:24:19.698 "uuid": "885b52cd-42d4-4e7d-a25f-60589a0c6c8b", 00:24:19.698 "strip_size_kb": 64, 00:24:19.698 "state": "configuring", 00:24:19.698 "raid_level": "raid5f", 00:24:19.698 "superblock": true, 00:24:19.698 "num_base_bdevs": 4, 00:24:19.698 "num_base_bdevs_discovered": 1, 00:24:19.698 "num_base_bdevs_operational": 4, 00:24:19.698 "base_bdevs_list": [ 00:24:19.698 { 00:24:19.698 "name": "BaseBdev1", 00:24:19.698 "uuid": "430bc97a-e989-4289-88f4-b70048794bfb", 00:24:19.698 "is_configured": true, 00:24:19.698 "data_offset": 2048, 00:24:19.698 "data_size": 63488 00:24:19.698 }, 00:24:19.698 { 00:24:19.698 "name": "BaseBdev2", 00:24:19.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.698 "is_configured": false, 00:24:19.698 "data_offset": 0, 00:24:19.698 "data_size": 0 
00:24:19.698 }, 00:24:19.698 { 00:24:19.698 "name": "BaseBdev3", 00:24:19.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.698 "is_configured": false, 00:24:19.698 "data_offset": 0, 00:24:19.698 "data_size": 0 00:24:19.698 }, 00:24:19.698 { 00:24:19.698 "name": "BaseBdev4", 00:24:19.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.698 "is_configured": false, 00:24:19.698 "data_offset": 0, 00:24:19.698 "data_size": 0 00:24:19.698 } 00:24:19.698 ] 00:24:19.698 }' 00:24:19.698 01:06:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:19.698 01:06:54 -- common/autotest_common.sh@10 -- # set +x 00:24:20.266 01:06:54 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:20.525 [2024-11-18 01:06:54.837298] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:20.525 [2024-11-18 01:06:54.837368] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:24:20.525 01:06:54 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:24:20.525 01:06:54 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:20.783 01:06:55 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:21.042 BaseBdev1 00:24:21.042 01:06:55 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:24:21.042 01:06:55 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:21.042 01:06:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:21.042 01:06:55 -- common/autotest_common.sh@899 -- # local i 00:24:21.042 01:06:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:21.042 01:06:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:21.042 01:06:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:21.300 01:06:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:21.300 [ 00:24:21.300 { 00:24:21.300 "name": "BaseBdev1", 00:24:21.300 "aliases": [ 00:24:21.300 "38a04fa3-37fb-4548-8ce3-c3951e7f9d97" 00:24:21.300 ], 00:24:21.300 "product_name": "Malloc disk", 00:24:21.300 "block_size": 512, 00:24:21.300 "num_blocks": 65536, 00:24:21.300 "uuid": "38a04fa3-37fb-4548-8ce3-c3951e7f9d97", 00:24:21.300 "assigned_rate_limits": { 00:24:21.300 "rw_ios_per_sec": 0, 00:24:21.300 "rw_mbytes_per_sec": 0, 00:24:21.300 "r_mbytes_per_sec": 0, 00:24:21.300 "w_mbytes_per_sec": 0 00:24:21.300 }, 00:24:21.300 "claimed": false, 00:24:21.300 "zoned": false, 00:24:21.300 "supported_io_types": { 00:24:21.300 "read": true, 00:24:21.300 "write": true, 00:24:21.300 "unmap": true, 00:24:21.300 "write_zeroes": true, 00:24:21.300 "flush": true, 00:24:21.300 "reset": true, 00:24:21.300 "compare": false, 00:24:21.300 "compare_and_write": false, 00:24:21.300 "abort": true, 00:24:21.300 "nvme_admin": false, 00:24:21.300 "nvme_io": false 00:24:21.300 }, 00:24:21.300 "memory_domains": [ 00:24:21.300 { 00:24:21.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.300 "dma_device_type": 2 00:24:21.300 } 00:24:21.300 ], 00:24:21.300 "driver_specific": {} 00:24:21.300 } 00:24:21.300 ] 00:24:21.300 01:06:55 -- common/autotest_common.sh@905 -- # return 0 00:24:21.300 01:06:55 -- 
bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:21.558 [2024-11-18 01:06:55.860805] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:21.558 [2024-11-18 01:06:55.863246] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:21.558 [2024-11-18 01:06:55.863325] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:21.558 [2024-11-18 01:06:55.863335] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:21.558 [2024-11-18 01:06:55.863375] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:21.558 [2024-11-18 01:06:55.863383] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:21.558 [2024-11-18 01:06:55.863401] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:21.558 01:06:55 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:21.558 01:06:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:21.558 01:06:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:21.558 01:06:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:21.558 01:06:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:21.558 01:06:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:21.558 01:06:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:21.558 01:06:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:21.558 01:06:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:21.558 01:06:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:21.558 01:06:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:21.558 01:06:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:21.558 01:06:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.558 01:06:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.817 01:06:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:21.817 "name": "Existed_Raid", 00:24:21.817 "uuid": "7f10ff26-3745-4693-a41f-25f804c9f5d2", 00:24:21.817 "strip_size_kb": 64, 00:24:21.817 "state": "configuring", 00:24:21.817 "raid_level": "raid5f", 00:24:21.817 "superblock": true, 00:24:21.817 "num_base_bdevs": 4, 00:24:21.817 "num_base_bdevs_discovered": 1, 00:24:21.817 "num_base_bdevs_operational": 4, 00:24:21.817 "base_bdevs_list": [ 00:24:21.817 { 00:24:21.817 "name": "BaseBdev1", 00:24:21.817 "uuid": "38a04fa3-37fb-4548-8ce3-c3951e7f9d97", 00:24:21.817 "is_configured": true, 00:24:21.817 "data_offset": 2048, 00:24:21.817 "data_size": 63488 00:24:21.817 }, 00:24:21.817 { 00:24:21.817 "name": "BaseBdev2", 00:24:21.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.817 "is_configured": false, 00:24:21.817 "data_offset": 0, 00:24:21.817 "data_size": 0 00:24:21.817 }, 00:24:21.817 { 00:24:21.817 "name": "BaseBdev3", 00:24:21.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.817 "is_configured": false, 00:24:21.817 "data_offset": 0, 00:24:21.817 "data_size": 0 00:24:21.817 }, 00:24:21.817 { 00:24:21.817 "name": "BaseBdev4", 00:24:21.817 "uuid": "00000000-0000-0000-0000-000000000000", 
00:24:21.817 "is_configured": false, 00:24:21.817 "data_offset": 0, 00:24:21.817 "data_size": 0 00:24:21.817 } 00:24:21.817 ] 00:24:21.817 }' 00:24:21.817 01:06:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:21.817 01:06:56 -- common/autotest_common.sh@10 -- # set +x 00:24:22.386 01:06:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:22.645 [2024-11-18 01:06:56.928314] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:22.645 BaseBdev2 00:24:22.645 01:06:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:22.645 01:06:56 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:22.645 01:06:56 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:22.645 01:06:56 -- common/autotest_common.sh@899 -- # local i 00:24:22.645 01:06:56 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:22.645 01:06:56 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:22.645 01:06:56 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:22.904 01:06:57 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:23.163 [ 00:24:23.163 { 00:24:23.163 "name": "BaseBdev2", 00:24:23.163 "aliases": [ 00:24:23.163 "8a2f02a7-338d-40a0-a2ab-2d2c58775cb5" 00:24:23.163 ], 00:24:23.163 "product_name": "Malloc disk", 00:24:23.163 "block_size": 512, 00:24:23.163 "num_blocks": 65536, 00:24:23.163 "uuid": "8a2f02a7-338d-40a0-a2ab-2d2c58775cb5", 00:24:23.163 "assigned_rate_limits": { 00:24:23.163 "rw_ios_per_sec": 0, 00:24:23.163 "rw_mbytes_per_sec": 0, 00:24:23.163 "r_mbytes_per_sec": 0, 00:24:23.163 "w_mbytes_per_sec": 0 00:24:23.163 }, 00:24:23.163 "claimed": true, 00:24:23.163 "claim_type": "exclusive_write", 00:24:23.163 "zoned": false, 00:24:23.163 "supported_io_types": { 00:24:23.163 "read": true, 00:24:23.163 "write": true, 00:24:23.163 "unmap": true, 00:24:23.163 "write_zeroes": true, 00:24:23.163 "flush": true, 00:24:23.163 "reset": true, 00:24:23.163 "compare": false, 00:24:23.163 "compare_and_write": false, 00:24:23.163 "abort": true, 00:24:23.163 "nvme_admin": false, 00:24:23.163 "nvme_io": false 00:24:23.163 }, 00:24:23.163 "memory_domains": [ 00:24:23.163 { 00:24:23.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.163 "dma_device_type": 2 00:24:23.163 } 00:24:23.163 ], 00:24:23.163 "driver_specific": {} 00:24:23.163 } 00:24:23.163 ] 00:24:23.163 01:06:57 -- common/autotest_common.sh@905 -- # return 0 00:24:23.163 01:06:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:23.163 01:06:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:23.163 01:06:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:23.163 01:06:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:23.163 01:06:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:23.163 01:06:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:23.163 01:06:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:23.163 01:06:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:23.163 01:06:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:23.163 01:06:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:23.163 01:06:57 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:24:23.163 01:06:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:23.164 01:06:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.164 01:06:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:23.423 01:06:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:23.423 "name": "Existed_Raid", 00:24:23.423 "uuid": "7f10ff26-3745-4693-a41f-25f804c9f5d2", 00:24:23.423 "strip_size_kb": 64, 00:24:23.423 "state": "configuring", 00:24:23.423 "raid_level": "raid5f", 00:24:23.423 "superblock": true, 00:24:23.423 "num_base_bdevs": 4, 00:24:23.423 "num_base_bdevs_discovered": 2, 00:24:23.423 "num_base_bdevs_operational": 4, 00:24:23.423 "base_bdevs_list": [ 00:24:23.423 { 00:24:23.423 "name": "BaseBdev1", 00:24:23.423 "uuid": "38a04fa3-37fb-4548-8ce3-c3951e7f9d97", 00:24:23.423 "is_configured": true, 00:24:23.423 "data_offset": 2048, 00:24:23.423 "data_size": 63488 00:24:23.423 }, 00:24:23.423 { 00:24:23.423 "name": "BaseBdev2", 00:24:23.423 "uuid": "8a2f02a7-338d-40a0-a2ab-2d2c58775cb5", 00:24:23.423 "is_configured": true, 00:24:23.423 "data_offset": 2048, 00:24:23.423 "data_size": 63488 00:24:23.423 }, 00:24:23.423 { 00:24:23.423 "name": "BaseBdev3", 00:24:23.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.423 "is_configured": false, 00:24:23.423 "data_offset": 0, 00:24:23.423 "data_size": 0 00:24:23.423 }, 00:24:23.423 { 00:24:23.423 "name": "BaseBdev4", 00:24:23.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.423 "is_configured": false, 00:24:23.423 "data_offset": 0, 00:24:23.423 "data_size": 0 00:24:23.423 } 00:24:23.423 ] 00:24:23.423 }' 00:24:23.423 01:06:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:23.423 01:06:57 -- common/autotest_common.sh@10 -- # set +x 00:24:23.991 01:06:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:23.991 [2024-11-18 01:06:58.293752] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:23.991 BaseBdev3 00:24:23.991 01:06:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:23.991 01:06:58 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:23.991 01:06:58 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:23.991 01:06:58 -- common/autotest_common.sh@899 -- # local i 00:24:23.991 01:06:58 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:23.991 01:06:58 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:23.991 01:06:58 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:24.250 01:06:58 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:24.509 [ 00:24:24.509 { 00:24:24.509 "name": "BaseBdev3", 00:24:24.509 "aliases": [ 00:24:24.509 "7f003d94-d1b2-41e7-9fb2-7ea798be252f" 00:24:24.509 ], 00:24:24.509 "product_name": "Malloc disk", 00:24:24.509 "block_size": 512, 00:24:24.509 "num_blocks": 65536, 00:24:24.509 "uuid": "7f003d94-d1b2-41e7-9fb2-7ea798be252f", 00:24:24.509 "assigned_rate_limits": { 00:24:24.509 "rw_ios_per_sec": 0, 00:24:24.509 "rw_mbytes_per_sec": 0, 00:24:24.509 "r_mbytes_per_sec": 0, 00:24:24.509 "w_mbytes_per_sec": 0 00:24:24.509 }, 00:24:24.509 "claimed": true, 00:24:24.509 "claim_type": "exclusive_write", 
00:24:24.509 "zoned": false, 00:24:24.509 "supported_io_types": { 00:24:24.509 "read": true, 00:24:24.509 "write": true, 00:24:24.509 "unmap": true, 00:24:24.509 "write_zeroes": true, 00:24:24.509 "flush": true, 00:24:24.509 "reset": true, 00:24:24.509 "compare": false, 00:24:24.509 "compare_and_write": false, 00:24:24.509 "abort": true, 00:24:24.509 "nvme_admin": false, 00:24:24.509 "nvme_io": false 00:24:24.509 }, 00:24:24.509 "memory_domains": [ 00:24:24.509 { 00:24:24.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.509 "dma_device_type": 2 00:24:24.509 } 00:24:24.509 ], 00:24:24.509 "driver_specific": {} 00:24:24.509 } 00:24:24.509 ] 00:24:24.509 01:06:58 -- common/autotest_common.sh@905 -- # return 0 00:24:24.509 01:06:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:24.509 01:06:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:24.509 01:06:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:24.509 01:06:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:24.509 01:06:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:24.509 01:06:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:24.509 01:06:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:24.509 01:06:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:24.509 01:06:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:24.509 01:06:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:24.509 01:06:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:24.509 01:06:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:24.509 01:06:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.509 01:06:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:24.768 01:06:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:24.768 "name": "Existed_Raid", 00:24:24.768 "uuid": "7f10ff26-3745-4693-a41f-25f804c9f5d2", 00:24:24.768 "strip_size_kb": 64, 00:24:24.768 "state": "configuring", 00:24:24.768 "raid_level": "raid5f", 00:24:24.768 "superblock": true, 00:24:24.768 "num_base_bdevs": 4, 00:24:24.768 "num_base_bdevs_discovered": 3, 00:24:24.768 "num_base_bdevs_operational": 4, 00:24:24.768 "base_bdevs_list": [ 00:24:24.768 { 00:24:24.768 "name": "BaseBdev1", 00:24:24.768 "uuid": "38a04fa3-37fb-4548-8ce3-c3951e7f9d97", 00:24:24.768 "is_configured": true, 00:24:24.768 "data_offset": 2048, 00:24:24.768 "data_size": 63488 00:24:24.768 }, 00:24:24.768 { 00:24:24.768 "name": "BaseBdev2", 00:24:24.768 "uuid": "8a2f02a7-338d-40a0-a2ab-2d2c58775cb5", 00:24:24.768 "is_configured": true, 00:24:24.768 "data_offset": 2048, 00:24:24.768 "data_size": 63488 00:24:24.768 }, 00:24:24.768 { 00:24:24.768 "name": "BaseBdev3", 00:24:24.768 "uuid": "7f003d94-d1b2-41e7-9fb2-7ea798be252f", 00:24:24.768 "is_configured": true, 00:24:24.768 "data_offset": 2048, 00:24:24.768 "data_size": 63488 00:24:24.768 }, 00:24:24.768 { 00:24:24.768 "name": "BaseBdev4", 00:24:24.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.768 "is_configured": false, 00:24:24.768 "data_offset": 0, 00:24:24.768 "data_size": 0 00:24:24.768 } 00:24:24.768 ] 00:24:24.768 }' 00:24:24.768 01:06:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:24.768 01:06:59 -- common/autotest_common.sh@10 -- # set +x 00:24:25.344 01:06:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:25.603 [2024-11-18 01:06:59.760111] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:25.603 [2024-11-18 01:06:59.760412] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:24:25.603 [2024-11-18 01:06:59.760426] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:25.603 [2024-11-18 01:06:59.760590] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:24:25.603 [2024-11-18 01:06:59.761447] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:24:25.603 [2024-11-18 01:06:59.761469] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:24:25.603 [2024-11-18 01:06:59.761645] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:25.603 BaseBdev4 00:24:25.603 01:06:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:24:25.603 01:06:59 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:25.603 01:06:59 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:25.603 01:06:59 -- common/autotest_common.sh@899 -- # local i 00:24:25.603 01:06:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:25.603 01:06:59 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:25.603 01:06:59 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:25.603 01:06:59 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:25.862 [ 00:24:25.862 { 00:24:25.862 "name": "BaseBdev4", 00:24:25.862 "aliases": [ 00:24:25.862 "b6d1859d-a1eb-4c6a-89a1-e3e10600e303" 00:24:25.862 ], 00:24:25.862 "product_name": "Malloc disk", 00:24:25.862 "block_size": 512, 00:24:25.862 "num_blocks": 65536, 00:24:25.863 "uuid": "b6d1859d-a1eb-4c6a-89a1-e3e10600e303", 00:24:25.863 "assigned_rate_limits": { 00:24:25.863 "rw_ios_per_sec": 0, 00:24:25.863 "rw_mbytes_per_sec": 0, 00:24:25.863 "r_mbytes_per_sec": 0, 00:24:25.863 "w_mbytes_per_sec": 0 00:24:25.863 }, 00:24:25.863 "claimed": true, 00:24:25.863 "claim_type": "exclusive_write", 00:24:25.863 "zoned": false, 00:24:25.863 "supported_io_types": { 00:24:25.863 "read": true, 00:24:25.863 "write": true, 00:24:25.863 "unmap": true, 00:24:25.863 "write_zeroes": true, 00:24:25.863 "flush": true, 00:24:25.863 "reset": true, 00:24:25.863 "compare": false, 00:24:25.863 "compare_and_write": false, 00:24:25.863 "abort": true, 00:24:25.863 "nvme_admin": false, 00:24:25.863 "nvme_io": false 00:24:25.863 }, 00:24:25.863 "memory_domains": [ 00:24:25.863 { 00:24:25.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.863 "dma_device_type": 2 00:24:25.863 } 00:24:25.863 ], 00:24:25.863 "driver_specific": {} 00:24:25.863 } 00:24:25.863 ] 00:24:25.863 01:07:00 -- common/autotest_common.sh@905 -- # return 0 00:24:25.863 01:07:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:25.863 01:07:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:25.863 01:07:00 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:25.863 01:07:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:25.863 01:07:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:25.863 01:07:00 -- bdev/bdev_raid.sh@119 -- 
# local raid_level=raid5f 00:24:25.863 01:07:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:25.863 01:07:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:25.863 01:07:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:25.863 01:07:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:25.863 01:07:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:25.863 01:07:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:25.863 01:07:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.863 01:07:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.122 01:07:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:26.122 "name": "Existed_Raid", 00:24:26.122 "uuid": "7f10ff26-3745-4693-a41f-25f804c9f5d2", 00:24:26.122 "strip_size_kb": 64, 00:24:26.122 "state": "online", 00:24:26.122 "raid_level": "raid5f", 00:24:26.122 "superblock": true, 00:24:26.122 "num_base_bdevs": 4, 00:24:26.122 "num_base_bdevs_discovered": 4, 00:24:26.122 "num_base_bdevs_operational": 4, 00:24:26.122 "base_bdevs_list": [ 00:24:26.122 { 00:24:26.122 "name": "BaseBdev1", 00:24:26.122 "uuid": "38a04fa3-37fb-4548-8ce3-c3951e7f9d97", 00:24:26.122 "is_configured": true, 00:24:26.122 "data_offset": 2048, 00:24:26.122 "data_size": 63488 00:24:26.122 }, 00:24:26.122 { 00:24:26.122 "name": "BaseBdev2", 00:24:26.122 "uuid": "8a2f02a7-338d-40a0-a2ab-2d2c58775cb5", 00:24:26.122 "is_configured": true, 00:24:26.122 "data_offset": 2048, 00:24:26.122 "data_size": 63488 00:24:26.122 }, 00:24:26.122 { 00:24:26.122 "name": "BaseBdev3", 00:24:26.122 "uuid": "7f003d94-d1b2-41e7-9fb2-7ea798be252f", 00:24:26.122 "is_configured": true, 00:24:26.122 "data_offset": 2048, 00:24:26.122 "data_size": 63488 00:24:26.122 }, 00:24:26.122 { 00:24:26.122 "name": "BaseBdev4", 00:24:26.122 "uuid": "b6d1859d-a1eb-4c6a-89a1-e3e10600e303", 00:24:26.122 "is_configured": true, 00:24:26.122 "data_offset": 2048, 00:24:26.122 "data_size": 63488 00:24:26.122 } 00:24:26.122 ] 00:24:26.122 }' 00:24:26.122 01:07:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:26.122 01:07:00 -- common/autotest_common.sh@10 -- # set +x 00:24:26.690 01:07:00 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:26.949 [2024-11-18 01:07:01.138758] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:26.949 01:07:01 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.949 01:07:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.209 01:07:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:27.209 "name": "Existed_Raid", 00:24:27.209 "uuid": "7f10ff26-3745-4693-a41f-25f804c9f5d2", 00:24:27.209 "strip_size_kb": 64, 00:24:27.209 "state": "online", 00:24:27.209 "raid_level": "raid5f", 00:24:27.209 "superblock": true, 00:24:27.209 "num_base_bdevs": 4, 00:24:27.209 "num_base_bdevs_discovered": 3, 00:24:27.209 "num_base_bdevs_operational": 3, 00:24:27.209 "base_bdevs_list": [ 00:24:27.209 { 00:24:27.209 "name": null, 00:24:27.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.209 "is_configured": false, 00:24:27.209 "data_offset": 2048, 00:24:27.209 "data_size": 63488 00:24:27.209 }, 00:24:27.209 { 00:24:27.209 "name": "BaseBdev2", 00:24:27.209 "uuid": "8a2f02a7-338d-40a0-a2ab-2d2c58775cb5", 00:24:27.209 "is_configured": true, 00:24:27.209 "data_offset": 2048, 00:24:27.209 "data_size": 63488 00:24:27.209 }, 00:24:27.209 { 00:24:27.209 "name": "BaseBdev3", 00:24:27.209 "uuid": "7f003d94-d1b2-41e7-9fb2-7ea798be252f", 00:24:27.209 "is_configured": true, 00:24:27.209 "data_offset": 2048, 00:24:27.209 "data_size": 63488 00:24:27.209 }, 00:24:27.209 { 00:24:27.209 "name": "BaseBdev4", 00:24:27.209 "uuid": "b6d1859d-a1eb-4c6a-89a1-e3e10600e303", 00:24:27.209 "is_configured": true, 00:24:27.209 "data_offset": 2048, 00:24:27.209 "data_size": 63488 00:24:27.209 } 00:24:27.209 ] 00:24:27.209 }' 00:24:27.209 01:07:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:27.209 01:07:01 -- common/autotest_common.sh@10 -- # set +x 00:24:27.777 01:07:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:27.777 01:07:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:27.777 01:07:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.777 01:07:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:27.777 01:07:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:27.778 01:07:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:27.778 01:07:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:28.036 [2024-11-18 01:07:02.382652] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:28.036 [2024-11-18 01:07:02.382704] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:28.036 [2024-11-18 01:07:02.382802] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:28.036 01:07:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:28.036 01:07:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:28.036 01:07:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.036 01:07:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:28.295 01:07:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:28.295 01:07:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:28.295 01:07:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:28.554 [2024-11-18 01:07:02.815425] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:28.554 01:07:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:28.554 01:07:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:28.554 01:07:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.554 01:07:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:28.812 01:07:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:28.812 01:07:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:28.812 01:07:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:29.075 [2024-11-18 01:07:03.344221] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:29.075 [2024-11-18 01:07:03.344296] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:24:29.075 01:07:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:29.075 01:07:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:29.075 01:07:03 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.075 01:07:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:29.334 01:07:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:29.334 01:07:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:29.334 01:07:03 -- bdev/bdev_raid.sh@287 -- # killprocess 140351 00:24:29.334 01:07:03 -- common/autotest_common.sh@936 -- # '[' -z 140351 ']' 00:24:29.334 01:07:03 -- common/autotest_common.sh@940 -- # kill -0 140351 00:24:29.334 01:07:03 -- common/autotest_common.sh@941 -- # uname 00:24:29.334 01:07:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:29.334 01:07:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140351 00:24:29.334 01:07:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:29.334 01:07:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:29.334 01:07:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140351' 00:24:29.334 killing process with pid 140351 00:24:29.334 01:07:03 -- common/autotest_common.sh@955 -- # kill 140351 00:24:29.334 [2024-11-18 01:07:03.592554] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:29.334 [2024-11-18 01:07:03.592651] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:29.334 01:07:03 -- common/autotest_common.sh@960 -- # wait 140351 00:24:29.593 ************************************ 00:24:29.593 END TEST raid5f_state_function_test_sb 00:24:29.593 ************************************ 00:24:29.593 01:07:03 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:29.593 00:24:29.593 real 0m13.109s 00:24:29.593 user 0m23.219s 00:24:29.593 sys 0m2.368s 00:24:29.593 01:07:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:29.593 01:07:03 -- common/autotest_common.sh@10 -- # set +x 00:24:29.593 01:07:03 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:24:29.593 01:07:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:24:29.593 01:07:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:29.593 01:07:03 -- common/autotest_common.sh@10 -- # set +x 00:24:29.593 
************************************ 00:24:29.593 START TEST raid5f_superblock_test 00:24:29.593 ************************************ 00:24:29.593 01:07:03 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 4 00:24:29.593 01:07:03 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@357 -- # raid_pid=140776 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@358 -- # waitforlisten 140776 /var/tmp/spdk-raid.sock 00:24:29.853 01:07:03 -- common/autotest_common.sh@829 -- # '[' -z 140776 ']' 00:24:29.853 01:07:03 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:29.853 01:07:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:29.853 01:07:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.853 01:07:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:29.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:29.853 01:07:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.853 01:07:03 -- common/autotest_common.sh@10 -- # set +x 00:24:29.853 [2024-11-18 01:07:04.067057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
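The superblock test that starts here follows the same RPC-driven pattern. Roughly: each malloc bdev is wrapped in a passthru bdev (pt1..pt4), raid_bdev1 is built on the passthru bdevs with a superblock, and the test then verifies that creating a raid directly on the underlying malloc bdevs is rejected because they already carry a raid superblock. The sketch below is a simplified reconstruction under those assumptions (socket path, bdev names, and UUIDs as used in this run), not part of the captured trace.

  # Each base device is a malloc bdev wrapped in a passthru bdev
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
      -u 00000000-0000-0000-0000-000000000001
  # ...repeated for malloc2/pt2, malloc3/pt3 and malloc4/pt4 in the trace below...
  # Build the raid5f bdev with a superblock on the passthru bdevs
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f \
      -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  # Re-creating a raid straight on the malloc bdevs is expected to fail with
  # JSON-RPC error -17 ("File exists"), since the existing superblock is detected
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f \
      -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1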
00:24:29.853 [2024-11-18 01:07:04.067340] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140776 ] 00:24:29.853 [2024-11-18 01:07:04.225351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.112 [2024-11-18 01:07:04.309522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.112 [2024-11-18 01:07:04.393659] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:30.681 01:07:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:30.681 01:07:04 -- common/autotest_common.sh@862 -- # return 0 00:24:30.681 01:07:04 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:24:30.681 01:07:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:30.681 01:07:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:24:30.681 01:07:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:24:30.681 01:07:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:30.681 01:07:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:30.681 01:07:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:30.681 01:07:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:30.681 01:07:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:30.681 malloc1 00:24:30.681 01:07:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:30.940 [2024-11-18 01:07:05.238628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:30.940 [2024-11-18 01:07:05.238762] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.940 [2024-11-18 01:07:05.238803] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:24:30.940 [2024-11-18 01:07:05.238863] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.940 [2024-11-18 01:07:05.241742] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.940 [2024-11-18 01:07:05.241811] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:30.940 pt1 00:24:30.940 01:07:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:30.940 01:07:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:30.940 01:07:05 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:24:30.940 01:07:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:24:30.940 01:07:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:30.940 01:07:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:30.940 01:07:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:30.940 01:07:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:30.940 01:07:05 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:31.199 malloc2 00:24:31.199 01:07:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:24:31.459 [2024-11-18 01:07:05.694424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:31.459 [2024-11-18 01:07:05.694527] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.459 [2024-11-18 01:07:05.694568] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:24:31.459 [2024-11-18 01:07:05.694617] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.459 [2024-11-18 01:07:05.697317] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.459 [2024-11-18 01:07:05.697371] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:31.459 pt2 00:24:31.459 01:07:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:31.459 01:07:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:31.459 01:07:05 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:24:31.459 01:07:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:24:31.459 01:07:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:31.459 01:07:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:31.459 01:07:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:31.459 01:07:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:31.459 01:07:05 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:31.718 malloc3 00:24:31.718 01:07:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:31.718 [2024-11-18 01:07:06.074346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:31.718 [2024-11-18 01:07:06.074454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.718 [2024-11-18 01:07:06.074498] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:31.718 [2024-11-18 01:07:06.074544] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.718 [2024-11-18 01:07:06.077273] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.718 [2024-11-18 01:07:06.077365] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:31.718 pt3 00:24:31.718 01:07:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:31.718 01:07:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:31.718 01:07:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:24:31.718 01:07:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:24:31.718 01:07:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:31.718 01:07:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:31.718 01:07:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:31.718 01:07:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:31.718 01:07:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:31.978 malloc4 00:24:31.978 01:07:06 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:24:32.238 [2024-11-18 01:07:06.449769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:32.238 [2024-11-18 01:07:06.449919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.238 [2024-11-18 01:07:06.449959] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:32.238 [2024-11-18 01:07:06.450011] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.238 [2024-11-18 01:07:06.452765] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.238 [2024-11-18 01:07:06.452822] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:32.238 pt4 00:24:32.238 01:07:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:32.238 01:07:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:32.238 01:07:06 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:32.238 [2024-11-18 01:07:06.633913] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:32.238 [2024-11-18 01:07:06.636373] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:32.238 [2024-11-18 01:07:06.636438] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:32.238 [2024-11-18 01:07:06.636477] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:32.238 [2024-11-18 01:07:06.636690] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:24:32.238 [2024-11-18 01:07:06.636700] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:32.238 [2024-11-18 01:07:06.636849] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:24:32.238 [2024-11-18 01:07:06.637693] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:24:32.238 [2024-11-18 01:07:06.637714] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:24:32.238 [2024-11-18 01:07:06.637907] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.497 01:07:06 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:32.498 01:07:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:32.498 01:07:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:32.498 01:07:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:32.498 01:07:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:32.498 01:07:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:32.498 01:07:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:32.498 01:07:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:32.498 01:07:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:32.498 01:07:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:32.498 01:07:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.498 01:07:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.498 01:07:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:32.498 "name": "raid_bdev1", 00:24:32.498 "uuid": 
"b41828f7-e6de-4c8f-804f-6210ab14ce67", 00:24:32.498 "strip_size_kb": 64, 00:24:32.498 "state": "online", 00:24:32.498 "raid_level": "raid5f", 00:24:32.498 "superblock": true, 00:24:32.498 "num_base_bdevs": 4, 00:24:32.498 "num_base_bdevs_discovered": 4, 00:24:32.498 "num_base_bdevs_operational": 4, 00:24:32.498 "base_bdevs_list": [ 00:24:32.498 { 00:24:32.498 "name": "pt1", 00:24:32.498 "uuid": "3595ea87-6876-5c90-bd85-213bd9112330", 00:24:32.498 "is_configured": true, 00:24:32.498 "data_offset": 2048, 00:24:32.498 "data_size": 63488 00:24:32.498 }, 00:24:32.498 { 00:24:32.498 "name": "pt2", 00:24:32.498 "uuid": "c52af940-bc1a-5195-87af-a6dcd8ac9699", 00:24:32.498 "is_configured": true, 00:24:32.498 "data_offset": 2048, 00:24:32.498 "data_size": 63488 00:24:32.498 }, 00:24:32.498 { 00:24:32.498 "name": "pt3", 00:24:32.498 "uuid": "d83d5ab8-5c0f-5ae4-802d-16d3b8339c56", 00:24:32.498 "is_configured": true, 00:24:32.498 "data_offset": 2048, 00:24:32.498 "data_size": 63488 00:24:32.498 }, 00:24:32.498 { 00:24:32.498 "name": "pt4", 00:24:32.498 "uuid": "8994863f-e0be-5ad0-ae07-a4c9580e4221", 00:24:32.498 "is_configured": true, 00:24:32.498 "data_offset": 2048, 00:24:32.498 "data_size": 63488 00:24:32.498 } 00:24:32.498 ] 00:24:32.498 }' 00:24:32.498 01:07:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:32.498 01:07:06 -- common/autotest_common.sh@10 -- # set +x 00:24:33.434 01:07:07 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:33.434 01:07:07 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:24:33.434 [2024-11-18 01:07:07.674236] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:33.434 01:07:07 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b41828f7-e6de-4c8f-804f-6210ab14ce67 00:24:33.434 01:07:07 -- bdev/bdev_raid.sh@380 -- # '[' -z b41828f7-e6de-4c8f-804f-6210ab14ce67 ']' 00:24:33.434 01:07:07 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:33.694 [2024-11-18 01:07:07.862086] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.694 [2024-11-18 01:07:07.862112] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:33.694 [2024-11-18 01:07:07.862238] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.694 [2024-11-18 01:07:07.862358] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.694 [2024-11-18 01:07:07.862369] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:24:33.694 01:07:07 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.694 01:07:07 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:24:33.694 01:07:08 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:24:33.694 01:07:08 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:24:33.694 01:07:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:33.694 01:07:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:33.953 01:07:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:33.953 01:07:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:24:34.213 01:07:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:34.213 01:07:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:34.472 01:07:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:34.472 01:07:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:34.731 01:07:08 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:34.731 01:07:08 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:34.731 01:07:09 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:24:34.731 01:07:09 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:34.731 01:07:09 -- common/autotest_common.sh@650 -- # local es=0 00:24:34.731 01:07:09 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:34.731 01:07:09 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:34.731 01:07:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.731 01:07:09 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:34.731 01:07:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.731 01:07:09 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:34.731 01:07:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.731 01:07:09 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:34.731 01:07:09 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:34.731 01:07:09 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:34.995 [2024-11-18 01:07:09.266354] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:34.995 [2024-11-18 01:07:09.268824] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:34.995 [2024-11-18 01:07:09.268871] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:34.995 [2024-11-18 01:07:09.268899] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:34.996 [2024-11-18 01:07:09.268947] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:24:34.996 [2024-11-18 01:07:09.269057] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:24:34.996 [2024-11-18 01:07:09.269087] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:24:34.996 [2024-11-18 01:07:09.269136] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:24:34.996 [2024-11-18 01:07:09.269178] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:34.996 [2024-11-18 01:07:09.269188] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:24:34.996 request: 00:24:34.996 { 00:24:34.996 "name": "raid_bdev1", 00:24:34.996 "raid_level": "raid5f", 00:24:34.996 "base_bdevs": [ 00:24:34.996 "malloc1", 00:24:34.996 "malloc2", 00:24:34.996 "malloc3", 00:24:34.996 "malloc4" 00:24:34.996 ], 00:24:34.996 "superblock": false, 00:24:34.996 "strip_size_kb": 64, 00:24:34.996 "method": "bdev_raid_create", 00:24:34.996 "req_id": 1 00:24:34.996 } 00:24:34.996 Got JSON-RPC error response 00:24:34.996 response: 00:24:34.996 { 00:24:34.996 "code": -17, 00:24:34.996 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:34.996 } 00:24:34.996 01:07:09 -- common/autotest_common.sh@653 -- # es=1 00:24:34.996 01:07:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:34.996 01:07:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:34.996 01:07:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:34.996 01:07:09 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.996 01:07:09 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:24:35.265 01:07:09 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:24:35.265 01:07:09 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:24:35.265 01:07:09 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:35.265 [2024-11-18 01:07:09.642809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:35.265 [2024-11-18 01:07:09.642932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.265 [2024-11-18 01:07:09.642974] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:35.265 [2024-11-18 01:07:09.643004] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.265 [2024-11-18 01:07:09.645738] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.265 [2024-11-18 01:07:09.645827] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:35.265 [2024-11-18 01:07:09.645925] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:35.265 [2024-11-18 01:07:09.646000] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:35.265 pt1 00:24:35.265 01:07:09 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:35.265 01:07:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:35.265 01:07:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:35.265 01:07:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:35.265 01:07:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:35.265 01:07:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:35.265 01:07:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:35.265 01:07:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:35.265 01:07:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:35.265 01:07:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:35.523 01:07:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.523 01:07:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.523 01:07:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:35.523 "name": "raid_bdev1", 00:24:35.523 "uuid": "b41828f7-e6de-4c8f-804f-6210ab14ce67", 00:24:35.523 "strip_size_kb": 64, 00:24:35.523 "state": "configuring", 00:24:35.523 "raid_level": "raid5f", 00:24:35.523 "superblock": true, 00:24:35.523 "num_base_bdevs": 4, 00:24:35.523 "num_base_bdevs_discovered": 1, 00:24:35.523 "num_base_bdevs_operational": 4, 00:24:35.523 "base_bdevs_list": [ 00:24:35.523 { 00:24:35.523 "name": "pt1", 00:24:35.523 "uuid": "3595ea87-6876-5c90-bd85-213bd9112330", 00:24:35.523 "is_configured": true, 00:24:35.523 "data_offset": 2048, 00:24:35.523 "data_size": 63488 00:24:35.523 }, 00:24:35.523 { 00:24:35.523 "name": null, 00:24:35.523 "uuid": "c52af940-bc1a-5195-87af-a6dcd8ac9699", 00:24:35.523 "is_configured": false, 00:24:35.523 "data_offset": 2048, 00:24:35.523 "data_size": 63488 00:24:35.523 }, 00:24:35.523 { 00:24:35.523 "name": null, 00:24:35.523 "uuid": "d83d5ab8-5c0f-5ae4-802d-16d3b8339c56", 00:24:35.523 "is_configured": false, 00:24:35.523 "data_offset": 2048, 00:24:35.523 "data_size": 63488 00:24:35.523 }, 00:24:35.523 { 00:24:35.523 "name": null, 00:24:35.523 "uuid": "8994863f-e0be-5ad0-ae07-a4c9580e4221", 00:24:35.523 "is_configured": false, 00:24:35.523 "data_offset": 2048, 00:24:35.523 "data_size": 63488 00:24:35.523 } 00:24:35.523 ] 00:24:35.523 }' 00:24:35.523 01:07:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:35.523 01:07:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.090 01:07:10 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:24:36.090 01:07:10 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:36.350 [2024-11-18 01:07:10.707045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:36.350 [2024-11-18 01:07:10.707170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.350 [2024-11-18 01:07:10.707215] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:36.350 [2024-11-18 01:07:10.707238] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.350 [2024-11-18 01:07:10.707713] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.350 [2024-11-18 01:07:10.707754] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:36.350 [2024-11-18 01:07:10.707845] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:36.350 [2024-11-18 01:07:10.707866] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:36.350 pt2 00:24:36.350 01:07:10 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:36.609 [2024-11-18 01:07:10.935096] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:36.609 01:07:10 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:36.609 01:07:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:36.609 01:07:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:36.609 01:07:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:36.609 01:07:10 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:36.609 01:07:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:36.610 01:07:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:36.610 01:07:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:36.610 01:07:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:36.610 01:07:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:36.610 01:07:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.610 01:07:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.869 01:07:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:36.869 "name": "raid_bdev1", 00:24:36.869 "uuid": "b41828f7-e6de-4c8f-804f-6210ab14ce67", 00:24:36.869 "strip_size_kb": 64, 00:24:36.869 "state": "configuring", 00:24:36.869 "raid_level": "raid5f", 00:24:36.869 "superblock": true, 00:24:36.869 "num_base_bdevs": 4, 00:24:36.869 "num_base_bdevs_discovered": 1, 00:24:36.869 "num_base_bdevs_operational": 4, 00:24:36.869 "base_bdevs_list": [ 00:24:36.869 { 00:24:36.869 "name": "pt1", 00:24:36.869 "uuid": "3595ea87-6876-5c90-bd85-213bd9112330", 00:24:36.869 "is_configured": true, 00:24:36.869 "data_offset": 2048, 00:24:36.869 "data_size": 63488 00:24:36.869 }, 00:24:36.869 { 00:24:36.869 "name": null, 00:24:36.869 "uuid": "c52af940-bc1a-5195-87af-a6dcd8ac9699", 00:24:36.869 "is_configured": false, 00:24:36.869 "data_offset": 2048, 00:24:36.869 "data_size": 63488 00:24:36.869 }, 00:24:36.869 { 00:24:36.869 "name": null, 00:24:36.869 "uuid": "d83d5ab8-5c0f-5ae4-802d-16d3b8339c56", 00:24:36.869 "is_configured": false, 00:24:36.869 "data_offset": 2048, 00:24:36.869 "data_size": 63488 00:24:36.869 }, 00:24:36.869 { 00:24:36.869 "name": null, 00:24:36.869 "uuid": "8994863f-e0be-5ad0-ae07-a4c9580e4221", 00:24:36.869 "is_configured": false, 00:24:36.869 "data_offset": 2048, 00:24:36.869 "data_size": 63488 00:24:36.869 } 00:24:36.869 ] 00:24:36.869 }' 00:24:36.869 01:07:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:36.869 01:07:11 -- common/autotest_common.sh@10 -- # set +x 00:24:37.438 01:07:11 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:24:37.438 01:07:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:37.438 01:07:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:37.697 [2024-11-18 01:07:12.055295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:37.697 [2024-11-18 01:07:12.055407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.697 [2024-11-18 01:07:12.055450] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:24:37.697 [2024-11-18 01:07:12.055476] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.697 [2024-11-18 01:07:12.055944] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.697 [2024-11-18 01:07:12.056003] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:37.697 [2024-11-18 01:07:12.056092] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:37.697 [2024-11-18 01:07:12.056115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:37.697 pt2 00:24:37.697 01:07:12 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:37.697 01:07:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:37.697 01:07:12 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:37.957 [2024-11-18 01:07:12.319353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:37.957 [2024-11-18 01:07:12.319478] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.957 [2024-11-18 01:07:12.319519] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:37.957 [2024-11-18 01:07:12.319547] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.957 [2024-11-18 01:07:12.319994] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.957 [2024-11-18 01:07:12.320051] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:37.957 [2024-11-18 01:07:12.320131] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:37.957 [2024-11-18 01:07:12.320151] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:37.957 pt3 00:24:37.957 01:07:12 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:37.957 01:07:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:37.957 01:07:12 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:38.216 [2024-11-18 01:07:12.499375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:38.216 [2024-11-18 01:07:12.499476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.216 [2024-11-18 01:07:12.499513] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:38.216 [2024-11-18 01:07:12.499541] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.216 [2024-11-18 01:07:12.499987] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.216 [2024-11-18 01:07:12.500038] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:38.216 [2024-11-18 01:07:12.500117] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:24:38.216 [2024-11-18 01:07:12.500144] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:38.216 [2024-11-18 01:07:12.500293] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:24:38.216 [2024-11-18 01:07:12.500303] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:38.216 [2024-11-18 01:07:12.500378] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:24:38.216 [2024-11-18 01:07:12.501063] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:24:38.216 [2024-11-18 01:07:12.501084] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:24:38.216 [2024-11-18 01:07:12.501182] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:38.216 pt4 00:24:38.216 01:07:12 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:38.216 01:07:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:24:38.216 01:07:12 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:38.216 01:07:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:38.216 01:07:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:38.216 01:07:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:38.216 01:07:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:38.216 01:07:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:38.217 01:07:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:38.217 01:07:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:38.217 01:07:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:38.217 01:07:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:38.217 01:07:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.217 01:07:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.476 01:07:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:38.476 "name": "raid_bdev1", 00:24:38.476 "uuid": "b41828f7-e6de-4c8f-804f-6210ab14ce67", 00:24:38.476 "strip_size_kb": 64, 00:24:38.476 "state": "online", 00:24:38.476 "raid_level": "raid5f", 00:24:38.476 "superblock": true, 00:24:38.476 "num_base_bdevs": 4, 00:24:38.476 "num_base_bdevs_discovered": 4, 00:24:38.476 "num_base_bdevs_operational": 4, 00:24:38.476 "base_bdevs_list": [ 00:24:38.476 { 00:24:38.476 "name": "pt1", 00:24:38.476 "uuid": "3595ea87-6876-5c90-bd85-213bd9112330", 00:24:38.476 "is_configured": true, 00:24:38.476 "data_offset": 2048, 00:24:38.476 "data_size": 63488 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "name": "pt2", 00:24:38.476 "uuid": "c52af940-bc1a-5195-87af-a6dcd8ac9699", 00:24:38.476 "is_configured": true, 00:24:38.476 "data_offset": 2048, 00:24:38.476 "data_size": 63488 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "name": "pt3", 00:24:38.476 "uuid": "d83d5ab8-5c0f-5ae4-802d-16d3b8339c56", 00:24:38.476 "is_configured": true, 00:24:38.476 "data_offset": 2048, 00:24:38.476 "data_size": 63488 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "name": "pt4", 00:24:38.476 "uuid": "8994863f-e0be-5ad0-ae07-a4c9580e4221", 00:24:38.476 "is_configured": true, 00:24:38.476 "data_offset": 2048, 00:24:38.476 "data_size": 63488 00:24:38.476 } 00:24:38.476 ] 00:24:38.476 }' 00:24:38.476 01:07:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:38.476 01:07:12 -- common/autotest_common.sh@10 -- # set +x 00:24:39.044 01:07:13 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:39.044 01:07:13 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:24:39.304 [2024-11-18 01:07:13.560134] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:39.304 01:07:13 -- bdev/bdev_raid.sh@430 -- # '[' b41828f7-e6de-4c8f-804f-6210ab14ce67 '!=' b41828f7-e6de-4c8f-804f-6210ab14ce67 ']' 00:24:39.304 01:07:13 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:24:39.304 01:07:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:39.304 01:07:13 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:39.304 01:07:13 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:39.563 [2024-11-18 01:07:13.824098] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:39.563 01:07:13 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:39.563 01:07:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:39.563 01:07:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:39.563 01:07:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:39.563 01:07:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:39.563 01:07:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:39.563 01:07:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:39.563 01:07:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:39.563 01:07:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:39.563 01:07:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:39.563 01:07:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.563 01:07:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.822 01:07:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:39.822 "name": "raid_bdev1", 00:24:39.822 "uuid": "b41828f7-e6de-4c8f-804f-6210ab14ce67", 00:24:39.822 "strip_size_kb": 64, 00:24:39.822 "state": "online", 00:24:39.822 "raid_level": "raid5f", 00:24:39.822 "superblock": true, 00:24:39.822 "num_base_bdevs": 4, 00:24:39.822 "num_base_bdevs_discovered": 3, 00:24:39.822 "num_base_bdevs_operational": 3, 00:24:39.822 "base_bdevs_list": [ 00:24:39.822 { 00:24:39.822 "name": null, 00:24:39.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.822 "is_configured": false, 00:24:39.822 "data_offset": 2048, 00:24:39.822 "data_size": 63488 00:24:39.822 }, 00:24:39.822 { 00:24:39.822 "name": "pt2", 00:24:39.822 "uuid": "c52af940-bc1a-5195-87af-a6dcd8ac9699", 00:24:39.822 "is_configured": true, 00:24:39.822 "data_offset": 2048, 00:24:39.822 "data_size": 63488 00:24:39.822 }, 00:24:39.822 { 00:24:39.822 "name": "pt3", 00:24:39.822 "uuid": "d83d5ab8-5c0f-5ae4-802d-16d3b8339c56", 00:24:39.822 "is_configured": true, 00:24:39.822 "data_offset": 2048, 00:24:39.822 "data_size": 63488 00:24:39.822 }, 00:24:39.822 { 00:24:39.822 "name": "pt4", 00:24:39.822 "uuid": "8994863f-e0be-5ad0-ae07-a4c9580e4221", 00:24:39.822 "is_configured": true, 00:24:39.822 "data_offset": 2048, 00:24:39.822 "data_size": 63488 00:24:39.822 } 00:24:39.822 ] 00:24:39.822 }' 00:24:39.822 01:07:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:39.822 01:07:14 -- common/autotest_common.sh@10 -- # set +x 00:24:40.390 01:07:14 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:40.649 [2024-11-18 01:07:14.956283] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:40.649 [2024-11-18 01:07:14.956322] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:40.650 [2024-11-18 01:07:14.956407] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:40.650 [2024-11-18 01:07:14.956496] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:40.650 [2024-11-18 01:07:14.956506] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:24:40.650 01:07:14 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:24:40.650 01:07:14 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.909 
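The step traced above is the redundancy check: deleting the passthru bdev pt1 leaves raid_bdev1 online, with num_base_bdevs_discovered dropping from 4 to 3, because raid5f tolerates a single missing member. A minimal out-of-harness sketch of that check, assuming an SPDK target already serving /var/tmp/spdk-raid.sock with the same bdevs (commands restated from the trace, nothing beyond the RPC surface shown there):

  # drop one base bdev; a raid5f array keeps running with 3 of 4 members
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
  # the array should still report state "online" with 3 discovered members
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
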
01:07:15 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:24:40.909 01:07:15 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:24:40.909 01:07:15 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:24:40.909 01:07:15 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:40.909 01:07:15 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:41.168 01:07:15 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:41.168 01:07:15 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:41.168 01:07:15 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:41.427 01:07:15 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:41.427 01:07:15 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:41.427 01:07:15 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:41.427 01:07:15 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:41.427 01:07:15 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:41.427 01:07:15 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:24:41.427 01:07:15 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:41.427 01:07:15 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:41.686 [2024-11-18 01:07:16.000460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:41.686 [2024-11-18 01:07:16.000587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:41.686 [2024-11-18 01:07:16.000628] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:24:41.686 [2024-11-18 01:07:16.000658] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:41.686 [2024-11-18 01:07:16.003403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:41.686 [2024-11-18 01:07:16.003487] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:41.686 [2024-11-18 01:07:16.003581] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:41.686 [2024-11-18 01:07:16.003616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:41.686 pt2 00:24:41.686 01:07:16 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:41.686 01:07:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:41.686 01:07:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:41.686 01:07:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:41.686 01:07:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:41.686 01:07:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:41.686 01:07:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:41.686 01:07:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:41.686 01:07:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:41.686 01:07:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:41.686 01:07:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.686 01:07:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.946 01:07:16 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:24:41.946 "name": "raid_bdev1", 00:24:41.946 "uuid": "b41828f7-e6de-4c8f-804f-6210ab14ce67", 00:24:41.946 "strip_size_kb": 64, 00:24:41.946 "state": "configuring", 00:24:41.946 "raid_level": "raid5f", 00:24:41.946 "superblock": true, 00:24:41.946 "num_base_bdevs": 4, 00:24:41.946 "num_base_bdevs_discovered": 1, 00:24:41.946 "num_base_bdevs_operational": 3, 00:24:41.946 "base_bdevs_list": [ 00:24:41.946 { 00:24:41.946 "name": null, 00:24:41.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.946 "is_configured": false, 00:24:41.946 "data_offset": 2048, 00:24:41.946 "data_size": 63488 00:24:41.946 }, 00:24:41.946 { 00:24:41.946 "name": "pt2", 00:24:41.946 "uuid": "c52af940-bc1a-5195-87af-a6dcd8ac9699", 00:24:41.946 "is_configured": true, 00:24:41.946 "data_offset": 2048, 00:24:41.946 "data_size": 63488 00:24:41.946 }, 00:24:41.946 { 00:24:41.946 "name": null, 00:24:41.946 "uuid": "d83d5ab8-5c0f-5ae4-802d-16d3b8339c56", 00:24:41.946 "is_configured": false, 00:24:41.946 "data_offset": 2048, 00:24:41.946 "data_size": 63488 00:24:41.946 }, 00:24:41.946 { 00:24:41.946 "name": null, 00:24:41.946 "uuid": "8994863f-e0be-5ad0-ae07-a4c9580e4221", 00:24:41.946 "is_configured": false, 00:24:41.946 "data_offset": 2048, 00:24:41.946 "data_size": 63488 00:24:41.946 } 00:24:41.946 ] 00:24:41.946 }' 00:24:41.946 01:07:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:41.946 01:07:16 -- common/autotest_common.sh@10 -- # set +x 00:24:42.515 01:07:16 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:24:42.515 01:07:16 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:42.515 01:07:16 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:42.774 [2024-11-18 01:07:17.024672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:42.775 [2024-11-18 01:07:17.024810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:42.775 [2024-11-18 01:07:17.024858] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:42.775 [2024-11-18 01:07:17.024882] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:42.775 [2024-11-18 01:07:17.025355] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:42.775 [2024-11-18 01:07:17.025397] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:42.775 [2024-11-18 01:07:17.025490] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:42.775 [2024-11-18 01:07:17.025512] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:42.775 pt3 00:24:42.775 01:07:17 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:42.775 01:07:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:42.775 01:07:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:42.775 01:07:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:42.775 01:07:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:42.775 01:07:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:42.775 01:07:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:42.775 01:07:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:42.775 01:07:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
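What is verified just above is superblock-driven re-assembly: raid_bdev1 and every passthru bdev were torn down, and re-creating pt2 alone was enough for examine to find the raid5f superblock on malloc2 and re-register raid_bdev1 in the "configuring" state with one of three operational members discovered. A hedged sketch of that single step against the same socket, using only calls that appear in the trace:

  # put a passthru bdev back on a malloc device that still carries the raid5f superblock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # examine re-registers the half-assembled array as "configuring"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .state'
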
00:24:42.775 01:07:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:42.775 01:07:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.775 01:07:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.034 01:07:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:43.034 "name": "raid_bdev1", 00:24:43.034 "uuid": "b41828f7-e6de-4c8f-804f-6210ab14ce67", 00:24:43.034 "strip_size_kb": 64, 00:24:43.034 "state": "configuring", 00:24:43.034 "raid_level": "raid5f", 00:24:43.034 "superblock": true, 00:24:43.034 "num_base_bdevs": 4, 00:24:43.034 "num_base_bdevs_discovered": 2, 00:24:43.034 "num_base_bdevs_operational": 3, 00:24:43.034 "base_bdevs_list": [ 00:24:43.034 { 00:24:43.034 "name": null, 00:24:43.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.034 "is_configured": false, 00:24:43.034 "data_offset": 2048, 00:24:43.034 "data_size": 63488 00:24:43.034 }, 00:24:43.034 { 00:24:43.034 "name": "pt2", 00:24:43.034 "uuid": "c52af940-bc1a-5195-87af-a6dcd8ac9699", 00:24:43.034 "is_configured": true, 00:24:43.034 "data_offset": 2048, 00:24:43.034 "data_size": 63488 00:24:43.034 }, 00:24:43.034 { 00:24:43.034 "name": "pt3", 00:24:43.034 "uuid": "d83d5ab8-5c0f-5ae4-802d-16d3b8339c56", 00:24:43.034 "is_configured": true, 00:24:43.034 "data_offset": 2048, 00:24:43.034 "data_size": 63488 00:24:43.034 }, 00:24:43.034 { 00:24:43.034 "name": null, 00:24:43.034 "uuid": "8994863f-e0be-5ad0-ae07-a4c9580e4221", 00:24:43.034 "is_configured": false, 00:24:43.034 "data_offset": 2048, 00:24:43.034 "data_size": 63488 00:24:43.034 } 00:24:43.034 ] 00:24:43.034 }' 00:24:43.034 01:07:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:43.034 01:07:17 -- common/autotest_common.sh@10 -- # set +x 00:24:43.605 01:07:17 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:24:43.605 01:07:17 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:43.605 01:07:17 -- bdev/bdev_raid.sh@462 -- # i=3 00:24:43.605 01:07:17 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:43.864 [2024-11-18 01:07:18.040894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:43.864 [2024-11-18 01:07:18.041012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.864 [2024-11-18 01:07:18.041056] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:43.864 [2024-11-18 01:07:18.041078] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.864 [2024-11-18 01:07:18.041573] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:43.864 [2024-11-18 01:07:18.041615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:43.864 [2024-11-18 01:07:18.041710] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:24:43.864 [2024-11-18 01:07:18.041732] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:43.864 [2024-11-18 01:07:18.041874] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:24:43.864 [2024-11-18 01:07:18.041887] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:43.864 [2024-11-18 01:07:18.041948] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002c80 00:24:43.865 [2024-11-18 01:07:18.042764] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:24:43.865 [2024-11-18 01:07:18.042787] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:24:43.865 [2024-11-18 01:07:18.043010] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:43.865 pt4 00:24:43.865 01:07:18 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:43.865 01:07:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:43.865 01:07:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:43.865 01:07:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:43.865 01:07:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:43.865 01:07:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:43.865 01:07:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:43.865 01:07:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:43.865 01:07:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:43.865 01:07:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:43.865 01:07:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.865 01:07:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.125 01:07:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:44.126 "name": "raid_bdev1", 00:24:44.126 "uuid": "b41828f7-e6de-4c8f-804f-6210ab14ce67", 00:24:44.126 "strip_size_kb": 64, 00:24:44.126 "state": "online", 00:24:44.126 "raid_level": "raid5f", 00:24:44.126 "superblock": true, 00:24:44.126 "num_base_bdevs": 4, 00:24:44.126 "num_base_bdevs_discovered": 3, 00:24:44.126 "num_base_bdevs_operational": 3, 00:24:44.126 "base_bdevs_list": [ 00:24:44.126 { 00:24:44.126 "name": null, 00:24:44.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.126 "is_configured": false, 00:24:44.126 "data_offset": 2048, 00:24:44.126 "data_size": 63488 00:24:44.126 }, 00:24:44.126 { 00:24:44.126 "name": "pt2", 00:24:44.126 "uuid": "c52af940-bc1a-5195-87af-a6dcd8ac9699", 00:24:44.126 "is_configured": true, 00:24:44.126 "data_offset": 2048, 00:24:44.126 "data_size": 63488 00:24:44.126 }, 00:24:44.126 { 00:24:44.126 "name": "pt3", 00:24:44.126 "uuid": "d83d5ab8-5c0f-5ae4-802d-16d3b8339c56", 00:24:44.126 "is_configured": true, 00:24:44.126 "data_offset": 2048, 00:24:44.126 "data_size": 63488 00:24:44.126 }, 00:24:44.126 { 00:24:44.126 "name": "pt4", 00:24:44.126 "uuid": "8994863f-e0be-5ad0-ae07-a4c9580e4221", 00:24:44.126 "is_configured": true, 00:24:44.126 "data_offset": 2048, 00:24:44.126 "data_size": 63488 00:24:44.126 } 00:24:44.126 ] 00:24:44.126 }' 00:24:44.126 01:07:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:44.126 01:07:18 -- common/autotest_common.sh@10 -- # set +x 00:24:44.691 01:07:18 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:24:44.691 01:07:18 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:44.949 [2024-11-18 01:07:19.102599] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:44.949 [2024-11-18 01:07:19.102647] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:44.950 [2024-11-18 01:07:19.102730] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:44.950 [2024-11-18 01:07:19.102816] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:44.950 [2024-11-18 01:07:19.102826] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:24:44.950 01:07:19 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.950 01:07:19 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:24:44.950 01:07:19 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:24:44.950 01:07:19 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:24:44.950 01:07:19 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:45.209 [2024-11-18 01:07:19.462498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:45.209 [2024-11-18 01:07:19.462618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.209 [2024-11-18 01:07:19.462672] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:45.209 [2024-11-18 01:07:19.462697] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.209 [2024-11-18 01:07:19.465804] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.209 [2024-11-18 01:07:19.465879] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:45.209 [2024-11-18 01:07:19.466192] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:45.209 [2024-11-18 01:07:19.466259] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:45.209 pt1 00:24:45.209 01:07:19 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:45.209 01:07:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:45.209 01:07:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:45.209 01:07:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:45.209 01:07:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:45.209 01:07:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:45.209 01:07:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:45.209 01:07:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:45.209 01:07:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:45.209 01:07:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:45.209 01:07:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.209 01:07:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.468 01:07:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:45.468 "name": "raid_bdev1", 00:24:45.468 "uuid": "b41828f7-e6de-4c8f-804f-6210ab14ce67", 00:24:45.468 "strip_size_kb": 64, 00:24:45.468 "state": "configuring", 00:24:45.468 "raid_level": "raid5f", 00:24:45.468 "superblock": true, 00:24:45.468 "num_base_bdevs": 4, 00:24:45.468 "num_base_bdevs_discovered": 1, 00:24:45.468 "num_base_bdevs_operational": 4, 00:24:45.468 "base_bdevs_list": [ 00:24:45.468 { 00:24:45.468 "name": "pt1", 00:24:45.468 "uuid": "3595ea87-6876-5c90-bd85-213bd9112330", 00:24:45.468 "is_configured": true, 
00:24:45.468 "data_offset": 2048, 00:24:45.468 "data_size": 63488 00:24:45.468 }, 00:24:45.468 { 00:24:45.468 "name": null, 00:24:45.468 "uuid": "c52af940-bc1a-5195-87af-a6dcd8ac9699", 00:24:45.468 "is_configured": false, 00:24:45.468 "data_offset": 2048, 00:24:45.468 "data_size": 63488 00:24:45.468 }, 00:24:45.468 { 00:24:45.468 "name": null, 00:24:45.468 "uuid": "d83d5ab8-5c0f-5ae4-802d-16d3b8339c56", 00:24:45.468 "is_configured": false, 00:24:45.468 "data_offset": 2048, 00:24:45.468 "data_size": 63488 00:24:45.468 }, 00:24:45.468 { 00:24:45.468 "name": null, 00:24:45.468 "uuid": "8994863f-e0be-5ad0-ae07-a4c9580e4221", 00:24:45.468 "is_configured": false, 00:24:45.468 "data_offset": 2048, 00:24:45.468 "data_size": 63488 00:24:45.468 } 00:24:45.468 ] 00:24:45.468 }' 00:24:45.468 01:07:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:45.468 01:07:19 -- common/autotest_common.sh@10 -- # set +x 00:24:46.037 01:07:20 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:24:46.037 01:07:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:46.037 01:07:20 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:46.296 01:07:20 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:46.296 01:07:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:46.296 01:07:20 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:46.296 01:07:20 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:46.296 01:07:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:46.296 01:07:20 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:46.555 01:07:20 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:46.555 01:07:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:46.555 01:07:20 -- bdev/bdev_raid.sh@489 -- # i=3 00:24:46.555 01:07:20 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:46.815 [2024-11-18 01:07:21.091411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:46.815 [2024-11-18 01:07:21.091561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:46.815 [2024-11-18 01:07:21.091599] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:46.815 [2024-11-18 01:07:21.091628] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:46.815 [2024-11-18 01:07:21.092428] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:46.815 [2024-11-18 01:07:21.092496] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:46.815 [2024-11-18 01:07:21.092730] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:24:46.815 [2024-11-18 01:07:21.092751] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:46.815 [2024-11-18 01:07:21.092760] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:46.815 [2024-11-18 01:07:21.092794] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:24:46.815 [2024-11-18 01:07:21.093015] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:46.815 pt4 00:24:46.815 01:07:21 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:46.815 01:07:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:46.815 01:07:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:46.815 01:07:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:46.815 01:07:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:46.815 01:07:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:46.815 01:07:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:46.815 01:07:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:46.815 01:07:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:46.815 01:07:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:46.815 01:07:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.815 01:07:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.074 01:07:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:47.074 "name": "raid_bdev1", 00:24:47.074 "uuid": "b41828f7-e6de-4c8f-804f-6210ab14ce67", 00:24:47.074 "strip_size_kb": 64, 00:24:47.074 "state": "configuring", 00:24:47.074 "raid_level": "raid5f", 00:24:47.074 "superblock": true, 00:24:47.074 "num_base_bdevs": 4, 00:24:47.074 "num_base_bdevs_discovered": 1, 00:24:47.074 "num_base_bdevs_operational": 3, 00:24:47.074 "base_bdevs_list": [ 00:24:47.074 { 00:24:47.074 "name": null, 00:24:47.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.074 "is_configured": false, 00:24:47.074 "data_offset": 2048, 00:24:47.074 "data_size": 63488 00:24:47.074 }, 00:24:47.074 { 00:24:47.074 "name": null, 00:24:47.074 "uuid": "c52af940-bc1a-5195-87af-a6dcd8ac9699", 00:24:47.074 "is_configured": false, 00:24:47.074 "data_offset": 2048, 00:24:47.074 "data_size": 63488 00:24:47.074 }, 00:24:47.074 { 00:24:47.074 "name": null, 00:24:47.074 "uuid": "d83d5ab8-5c0f-5ae4-802d-16d3b8339c56", 00:24:47.074 "is_configured": false, 00:24:47.074 "data_offset": 2048, 00:24:47.074 "data_size": 63488 00:24:47.074 }, 00:24:47.074 { 00:24:47.074 "name": "pt4", 00:24:47.074 "uuid": "8994863f-e0be-5ad0-ae07-a4c9580e4221", 00:24:47.074 "is_configured": true, 00:24:47.074 "data_offset": 2048, 00:24:47.074 "data_size": 63488 00:24:47.074 } 00:24:47.074 ] 00:24:47.074 }' 00:24:47.074 01:07:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:47.074 01:07:21 -- common/autotest_common.sh@10 -- # set +x 00:24:47.642 01:07:21 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:24:47.642 01:07:21 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:47.642 01:07:21 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:47.901 [2024-11-18 01:07:22.063624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:47.901 [2024-11-18 01:07:22.063768] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:47.901 [2024-11-18 01:07:22.063809] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:24:47.901 [2024-11-18 01:07:22.063838] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:47.901 [2024-11-18 01:07:22.064713] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:47.901 [2024-11-18 01:07:22.064789] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:47.901 [2024-11-18 01:07:22.064891] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:47.901 [2024-11-18 01:07:22.065221] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:47.901 pt2 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:47.901 [2024-11-18 01:07:22.239810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:47.901 [2024-11-18 01:07:22.239938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:47.901 [2024-11-18 01:07:22.239981] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:24:47.901 [2024-11-18 01:07:22.240012] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:47.901 [2024-11-18 01:07:22.240893] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:47.901 [2024-11-18 01:07:22.240964] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:47.901 [2024-11-18 01:07:22.241067] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:47.901 [2024-11-18 01:07:22.241320] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:47.901 [2024-11-18 01:07:22.241628] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:24:47.901 [2024-11-18 01:07:22.241646] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:47.901 [2024-11-18 01:07:22.241729] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:24:47.901 [2024-11-18 01:07:22.242894] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:24:47.901 [2024-11-18 01:07:22.242922] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:24:47.901 [2024-11-18 01:07:22.243243] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:47.901 pt3 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.901 01:07:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.160 01:07:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:48.160 "name": "raid_bdev1", 00:24:48.160 "uuid": "b41828f7-e6de-4c8f-804f-6210ab14ce67", 00:24:48.160 "strip_size_kb": 64, 00:24:48.160 "state": "online", 00:24:48.160 "raid_level": "raid5f", 00:24:48.160 "superblock": true, 00:24:48.160 "num_base_bdevs": 4, 00:24:48.160 "num_base_bdevs_discovered": 3, 00:24:48.160 "num_base_bdevs_operational": 3, 00:24:48.160 "base_bdevs_list": [ 00:24:48.160 { 00:24:48.160 "name": null, 00:24:48.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.160 "is_configured": false, 00:24:48.160 "data_offset": 2048, 00:24:48.160 "data_size": 63488 00:24:48.160 }, 00:24:48.160 { 00:24:48.160 "name": "pt2", 00:24:48.160 "uuid": "c52af940-bc1a-5195-87af-a6dcd8ac9699", 00:24:48.160 "is_configured": true, 00:24:48.160 "data_offset": 2048, 00:24:48.160 "data_size": 63488 00:24:48.160 }, 00:24:48.160 { 00:24:48.160 "name": "pt3", 00:24:48.160 "uuid": "d83d5ab8-5c0f-5ae4-802d-16d3b8339c56", 00:24:48.160 "is_configured": true, 00:24:48.160 "data_offset": 2048, 00:24:48.160 "data_size": 63488 00:24:48.160 }, 00:24:48.160 { 00:24:48.160 "name": "pt4", 00:24:48.160 "uuid": "8994863f-e0be-5ad0-ae07-a4c9580e4221", 00:24:48.160 "is_configured": true, 00:24:48.160 "data_offset": 2048, 00:24:48.160 "data_size": 63488 00:24:48.160 } 00:24:48.160 ] 00:24:48.160 }' 00:24:48.160 01:07:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:48.160 01:07:22 -- common/autotest_common.sh@10 -- # set +x 00:24:48.728 01:07:23 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:48.728 01:07:23 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:24:48.987 [2024-11-18 01:07:23.334903] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:48.987 01:07:23 -- bdev/bdev_raid.sh@506 -- # '[' b41828f7-e6de-4c8f-804f-6210ab14ce67 '!=' b41828f7-e6de-4c8f-804f-6210ab14ce67 ']' 00:24:48.987 01:07:23 -- bdev/bdev_raid.sh@511 -- # killprocess 140776 00:24:48.987 01:07:23 -- common/autotest_common.sh@936 -- # '[' -z 140776 ']' 00:24:48.987 01:07:23 -- common/autotest_common.sh@940 -- # kill -0 140776 00:24:48.987 01:07:23 -- common/autotest_common.sh@941 -- # uname 00:24:48.987 01:07:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:48.987 01:07:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140776 00:24:48.987 killing process with pid 140776 00:24:48.987 01:07:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:48.987 01:07:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:48.987 01:07:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140776' 00:24:48.987 01:07:23 -- common/autotest_common.sh@955 -- # kill 140776 00:24:48.987 01:07:23 -- common/autotest_common.sh@960 -- # wait 140776 00:24:48.987 [2024-11-18 01:07:23.384318] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:48.987 [2024-11-18 01:07:23.384692] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:48.987 [2024-11-18 01:07:23.384911] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:48.987 [2024-11-18 01:07:23.384932] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:24:49.245 [2024-11-18 01:07:23.467310] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:49.504 01:07:23 -- bdev/bdev_raid.sh@513 -- # return 0 00:24:49.504 00:24:49.504 real 0m19.867s 00:24:49.504 user 0m35.722s 00:24:49.504 sys 0m3.696s 00:24:49.504 01:07:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:49.504 01:07:23 -- common/autotest_common.sh@10 -- # set +x 00:24:49.504 ************************************ 00:24:49.504 END TEST raid5f_superblock_test 00:24:49.504 ************************************ 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:24:49.763 01:07:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:24:49.763 01:07:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:49.763 01:07:23 -- common/autotest_common.sh@10 -- # set +x 00:24:49.763 ************************************ 00:24:49.763 START TEST raid5f_rebuild_test 00:24:49.763 ************************************ 00:24:49.763 01:07:23 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 false false 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@544 -- # raid_pid=141423 
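raid5f_rebuild_test drives the same RPC surface, but the target is now bdevperf rather than a bare SPDK app, so the array can be rebuilt while background I/O runs against raid_bdev1. A sketch of the launch implied by the trace that follows (arguments copied from the log; the socket-wait step is left as the harness's own waitforlisten helper):

  # launch bdevperf as the RPC target with the arguments from the trace
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # ... wait for /var/tmp/spdk-raid.sock to come up (waitforlisten "$raid_pid" in the harness) ...
  # four 32 MiB malloc bdevs become the raid5f base devices
  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "$b"
  done
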
00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@545 -- # waitforlisten 141423 /var/tmp/spdk-raid.sock 00:24:49.763 01:07:23 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:49.763 01:07:23 -- common/autotest_common.sh@829 -- # '[' -z 141423 ']' 00:24:49.763 01:07:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:49.763 01:07:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:49.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:49.763 01:07:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:49.763 01:07:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:49.763 01:07:23 -- common/autotest_common.sh@10 -- # set +x 00:24:49.763 [2024-11-18 01:07:24.009662] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:49.763 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:49.763 Zero copy mechanism will not be used. 00:24:49.763 [2024-11-18 01:07:24.009930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141423 ] 00:24:49.763 [2024-11-18 01:07:24.165051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.022 [2024-11-18 01:07:24.242076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.022 [2024-11-18 01:07:24.320593] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:50.590 01:07:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:50.590 01:07:24 -- common/autotest_common.sh@862 -- # return 0 00:24:50.590 01:07:24 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:50.590 01:07:24 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:50.590 01:07:24 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:50.850 BaseBdev1 00:24:50.850 01:07:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:50.850 01:07:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:50.850 01:07:25 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:51.108 BaseBdev2 00:24:51.108 01:07:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:51.108 01:07:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:51.108 01:07:25 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:51.367 BaseBdev3 00:24:51.367 01:07:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:51.367 01:07:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:51.367 01:07:25 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:51.626 BaseBdev4 00:24:51.627 01:07:25 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:51.627 spare_malloc 00:24:51.627 01:07:25 -- 
bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:51.885 spare_delay 00:24:51.886 01:07:26 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:52.145 [2024-11-18 01:07:26.398456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:52.145 [2024-11-18 01:07:26.398593] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:52.145 [2024-11-18 01:07:26.398639] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:52.145 [2024-11-18 01:07:26.398689] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:52.145 [2024-11-18 01:07:26.401610] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:52.145 [2024-11-18 01:07:26.401674] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:52.145 spare 00:24:52.145 01:07:26 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:52.404 [2024-11-18 01:07:26.578611] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:52.404 [2024-11-18 01:07:26.581044] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:52.404 [2024-11-18 01:07:26.581115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:52.404 [2024-11-18 01:07:26.581154] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:52.404 [2024-11-18 01:07:26.581241] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:24:52.404 [2024-11-18 01:07:26.581250] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:52.404 [2024-11-18 01:07:26.581430] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:24:52.404 [2024-11-18 01:07:26.582298] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:24:52.404 [2024-11-18 01:07:26.582320] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:24:52.404 [2024-11-18 01:07:26.582530] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:52.404 01:07:26 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:52.404 01:07:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:52.404 01:07:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:52.404 01:07:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:52.404 01:07:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:52.404 01:07:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:52.404 01:07:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:52.404 01:07:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:52.404 01:07:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:52.404 01:07:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:52.404 01:07:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
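The verify_raid_bdev_state call traced here reads the raid bdev's JSON through the RPC socket and filters it with jq before asserting on individual fields. A condensed, hedged approximation of that check (variable names are illustrative; $rpc and $sock as in the sketch above):

    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state'      <<<"$info") == online ]] || exit 1
    [[ $(jq -r '.raid_level' <<<"$info") == raid5f ]] || exit 1
    (( $(jq -r '.num_base_bdevs_discovered' <<<"$info") == 4 )) || exit 1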
00:24:52.404 01:07:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.663 01:07:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:52.663 "name": "raid_bdev1", 00:24:52.663 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:24:52.663 "strip_size_kb": 64, 00:24:52.663 "state": "online", 00:24:52.663 "raid_level": "raid5f", 00:24:52.663 "superblock": false, 00:24:52.663 "num_base_bdevs": 4, 00:24:52.663 "num_base_bdevs_discovered": 4, 00:24:52.663 "num_base_bdevs_operational": 4, 00:24:52.663 "base_bdevs_list": [ 00:24:52.663 { 00:24:52.663 "name": "BaseBdev1", 00:24:52.663 "uuid": "c5f0c406-a0f9-4b75-a683-84718afcec58", 00:24:52.663 "is_configured": true, 00:24:52.663 "data_offset": 0, 00:24:52.663 "data_size": 65536 00:24:52.663 }, 00:24:52.663 { 00:24:52.663 "name": "BaseBdev2", 00:24:52.663 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:24:52.663 "is_configured": true, 00:24:52.663 "data_offset": 0, 00:24:52.663 "data_size": 65536 00:24:52.663 }, 00:24:52.663 { 00:24:52.663 "name": "BaseBdev3", 00:24:52.663 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:24:52.663 "is_configured": true, 00:24:52.663 "data_offset": 0, 00:24:52.663 "data_size": 65536 00:24:52.663 }, 00:24:52.663 { 00:24:52.663 "name": "BaseBdev4", 00:24:52.663 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:24:52.663 "is_configured": true, 00:24:52.663 "data_offset": 0, 00:24:52.663 "data_size": 65536 00:24:52.663 } 00:24:52.663 ] 00:24:52.663 }' 00:24:52.663 01:07:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:52.663 01:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:53.231 01:07:27 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:53.231 01:07:27 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:53.231 [2024-11-18 01:07:27.530914] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:53.231 01:07:27 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:24:53.231 01:07:27 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.231 01:07:27 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:53.490 01:07:27 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:24:53.490 01:07:27 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:53.490 01:07:27 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:53.490 01:07:27 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:53.490 01:07:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:53.490 01:07:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:53.490 01:07:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:53.490 01:07:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:53.490 01:07:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:53.490 01:07:27 -- bdev/nbd_common.sh@12 -- # local i 00:24:53.490 01:07:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:53.490 01:07:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:53.490 01:07:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:53.749 [2024-11-18 01:07:27.918926] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:24:53.749 /dev/nbd0 00:24:53.749 01:07:27 -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:24:53.749 01:07:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:53.749 01:07:27 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:53.749 01:07:27 -- common/autotest_common.sh@867 -- # local i 00:24:53.749 01:07:27 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:53.749 01:07:27 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:53.749 01:07:27 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:53.749 01:07:27 -- common/autotest_common.sh@871 -- # break 00:24:53.749 01:07:27 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:53.749 01:07:27 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:53.749 01:07:27 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:53.749 1+0 records in 00:24:53.749 1+0 records out 00:24:53.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209142 s, 19.6 MB/s 00:24:53.749 01:07:27 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.749 01:07:27 -- common/autotest_common.sh@884 -- # size=4096 00:24:53.749 01:07:27 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.749 01:07:27 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:53.749 01:07:27 -- common/autotest_common.sh@887 -- # return 0 00:24:53.749 01:07:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:53.749 01:07:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:53.749 01:07:27 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:53.749 01:07:27 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:24:53.749 01:07:27 -- bdev/bdev_raid.sh@582 -- # echo 192 00:24:53.749 01:07:27 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:24:54.317 512+0 records in 00:24:54.317 512+0 records out 00:24:54.317 100663296 bytes (101 MB, 96 MiB) copied, 0.474075 s, 212 MB/s 00:24:54.317 01:07:28 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:54.317 01:07:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:54.317 01:07:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:54.317 01:07:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:54.317 01:07:28 -- bdev/nbd_common.sh@51 -- # local i 00:24:54.317 01:07:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:54.317 01:07:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:54.576 01:07:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:54.576 [2024-11-18 01:07:28.747535] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.576 01:07:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:54.576 01:07:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:54.577 01:07:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:54.577 01:07:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:54.577 01:07:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:54.577 01:07:28 -- bdev/nbd_common.sh@41 -- # break 00:24:54.577 01:07:28 -- bdev/nbd_common.sh@45 -- # return 0 00:24:54.577 01:07:28 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:54.577 [2024-11-18 01:07:28.915133] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
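The block above exposes raid_bdev1 over NBD, writes 512 full stripes with dd, tears the NBD device down again, and then removes BaseBdev1 so the array drops to three operational base bdevs. Condensed into the underlying RPC calls (a sketch of the traced steps, reusing $rpc and $sock from above; all subcommands appear verbatim in the trace):

    "$rpc" -s "$sock" nbd_start_disk raid_bdev1 /dev/nbd0
    # 196608 B = 3 data strips x 64 KiB, i.e. one full raid5f write unit per I/O
    dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1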
00:24:54.577 01:07:28 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:54.577 01:07:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:54.577 01:07:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:54.577 01:07:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:54.577 01:07:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:54.577 01:07:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:54.577 01:07:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:54.577 01:07:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:54.577 01:07:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:54.577 01:07:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:54.577 01:07:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.577 01:07:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.836 01:07:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:54.836 "name": "raid_bdev1", 00:24:54.836 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:24:54.836 "strip_size_kb": 64, 00:24:54.836 "state": "online", 00:24:54.836 "raid_level": "raid5f", 00:24:54.836 "superblock": false, 00:24:54.836 "num_base_bdevs": 4, 00:24:54.836 "num_base_bdevs_discovered": 3, 00:24:54.836 "num_base_bdevs_operational": 3, 00:24:54.836 "base_bdevs_list": [ 00:24:54.836 { 00:24:54.836 "name": null, 00:24:54.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.836 "is_configured": false, 00:24:54.836 "data_offset": 0, 00:24:54.836 "data_size": 65536 00:24:54.836 }, 00:24:54.836 { 00:24:54.836 "name": "BaseBdev2", 00:24:54.836 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:24:54.836 "is_configured": true, 00:24:54.836 "data_offset": 0, 00:24:54.836 "data_size": 65536 00:24:54.836 }, 00:24:54.836 { 00:24:54.836 "name": "BaseBdev3", 00:24:54.836 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:24:54.836 "is_configured": true, 00:24:54.836 "data_offset": 0, 00:24:54.836 "data_size": 65536 00:24:54.836 }, 00:24:54.836 { 00:24:54.836 "name": "BaseBdev4", 00:24:54.836 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:24:54.836 "is_configured": true, 00:24:54.836 "data_offset": 0, 00:24:54.836 "data_size": 65536 00:24:54.836 } 00:24:54.836 ] 00:24:54.836 }' 00:24:54.836 01:07:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:54.836 01:07:29 -- common/autotest_common.sh@10 -- # set +x 00:24:55.403 01:07:29 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:55.662 [2024-11-18 01:07:29.871408] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:55.662 [2024-11-18 01:07:29.871482] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:55.662 [2024-11-18 01:07:29.877587] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027a60 00:24:55.662 [2024-11-18 01:07:29.880606] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:55.662 01:07:29 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:56.599 01:07:30 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:56.599 01:07:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:56.599 01:07:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 
00:24:56.599 01:07:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:56.599 01:07:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:56.599 01:07:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.599 01:07:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.858 01:07:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:56.858 "name": "raid_bdev1", 00:24:56.858 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:24:56.858 "strip_size_kb": 64, 00:24:56.858 "state": "online", 00:24:56.858 "raid_level": "raid5f", 00:24:56.858 "superblock": false, 00:24:56.858 "num_base_bdevs": 4, 00:24:56.858 "num_base_bdevs_discovered": 4, 00:24:56.858 "num_base_bdevs_operational": 4, 00:24:56.858 "process": { 00:24:56.858 "type": "rebuild", 00:24:56.858 "target": "spare", 00:24:56.858 "progress": { 00:24:56.858 "blocks": 23040, 00:24:56.858 "percent": 11 00:24:56.858 } 00:24:56.858 }, 00:24:56.858 "base_bdevs_list": [ 00:24:56.858 { 00:24:56.858 "name": "spare", 00:24:56.858 "uuid": "e33f6b64-ff6b-59ee-90e1-4b16375389f0", 00:24:56.858 "is_configured": true, 00:24:56.858 "data_offset": 0, 00:24:56.858 "data_size": 65536 00:24:56.858 }, 00:24:56.858 { 00:24:56.858 "name": "BaseBdev2", 00:24:56.858 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:24:56.858 "is_configured": true, 00:24:56.858 "data_offset": 0, 00:24:56.858 "data_size": 65536 00:24:56.858 }, 00:24:56.858 { 00:24:56.858 "name": "BaseBdev3", 00:24:56.858 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:24:56.858 "is_configured": true, 00:24:56.858 "data_offset": 0, 00:24:56.858 "data_size": 65536 00:24:56.858 }, 00:24:56.858 { 00:24:56.858 "name": "BaseBdev4", 00:24:56.858 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:24:56.858 "is_configured": true, 00:24:56.858 "data_offset": 0, 00:24:56.858 "data_size": 65536 00:24:56.858 } 00:24:56.858 ] 00:24:56.858 }' 00:24:56.858 01:07:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:56.858 01:07:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:56.858 01:07:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:56.858 01:07:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:56.858 01:07:31 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:57.117 [2024-11-18 01:07:31.369803] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:57.117 [2024-11-18 01:07:31.391267] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:57.117 [2024-11-18 01:07:31.391357] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:57.118 01:07:31 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:57.118 01:07:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:57.118 01:07:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:57.118 01:07:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:57.118 01:07:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:57.118 01:07:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:57.118 01:07:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:57.118 01:07:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:57.118 01:07:31 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:24:57.118 01:07:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:57.118 01:07:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.118 01:07:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.377 01:07:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:57.377 "name": "raid_bdev1", 00:24:57.377 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:24:57.377 "strip_size_kb": 64, 00:24:57.377 "state": "online", 00:24:57.377 "raid_level": "raid5f", 00:24:57.377 "superblock": false, 00:24:57.377 "num_base_bdevs": 4, 00:24:57.377 "num_base_bdevs_discovered": 3, 00:24:57.377 "num_base_bdevs_operational": 3, 00:24:57.377 "base_bdevs_list": [ 00:24:57.377 { 00:24:57.377 "name": null, 00:24:57.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.377 "is_configured": false, 00:24:57.377 "data_offset": 0, 00:24:57.377 "data_size": 65536 00:24:57.377 }, 00:24:57.377 { 00:24:57.377 "name": "BaseBdev2", 00:24:57.377 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:24:57.377 "is_configured": true, 00:24:57.377 "data_offset": 0, 00:24:57.377 "data_size": 65536 00:24:57.377 }, 00:24:57.377 { 00:24:57.377 "name": "BaseBdev3", 00:24:57.377 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:24:57.377 "is_configured": true, 00:24:57.377 "data_offset": 0, 00:24:57.377 "data_size": 65536 00:24:57.377 }, 00:24:57.377 { 00:24:57.377 "name": "BaseBdev4", 00:24:57.377 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:24:57.377 "is_configured": true, 00:24:57.377 "data_offset": 0, 00:24:57.377 "data_size": 65536 00:24:57.377 } 00:24:57.377 ] 00:24:57.377 }' 00:24:57.377 01:07:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:57.377 01:07:31 -- common/autotest_common.sh@10 -- # set +x 00:24:57.945 01:07:32 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:57.945 01:07:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:57.945 01:07:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:57.945 01:07:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:57.945 01:07:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:57.945 01:07:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.945 01:07:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.205 01:07:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:58.205 "name": "raid_bdev1", 00:24:58.205 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:24:58.205 "strip_size_kb": 64, 00:24:58.205 "state": "online", 00:24:58.205 "raid_level": "raid5f", 00:24:58.205 "superblock": false, 00:24:58.205 "num_base_bdevs": 4, 00:24:58.205 "num_base_bdevs_discovered": 3, 00:24:58.205 "num_base_bdevs_operational": 3, 00:24:58.205 "base_bdevs_list": [ 00:24:58.205 { 00:24:58.205 "name": null, 00:24:58.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.205 "is_configured": false, 00:24:58.205 "data_offset": 0, 00:24:58.205 "data_size": 65536 00:24:58.205 }, 00:24:58.205 { 00:24:58.205 "name": "BaseBdev2", 00:24:58.205 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:24:58.205 "is_configured": true, 00:24:58.205 "data_offset": 0, 00:24:58.205 "data_size": 65536 00:24:58.205 }, 00:24:58.205 { 00:24:58.205 "name": "BaseBdev3", 00:24:58.205 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:24:58.205 "is_configured": true, 
00:24:58.205 "data_offset": 0, 00:24:58.205 "data_size": 65536 00:24:58.205 }, 00:24:58.205 { 00:24:58.205 "name": "BaseBdev4", 00:24:58.205 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:24:58.205 "is_configured": true, 00:24:58.205 "data_offset": 0, 00:24:58.205 "data_size": 65536 00:24:58.205 } 00:24:58.205 ] 00:24:58.205 }' 00:24:58.205 01:07:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:58.205 01:07:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:58.205 01:07:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:58.465 01:07:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:58.465 01:07:32 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:58.465 [2024-11-18 01:07:32.860670] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:58.465 [2024-11-18 01:07:32.860944] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:58.465 [2024-11-18 01:07:32.866985] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027c00 00:24:58.724 [2024-11-18 01:07:32.869808] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:58.724 01:07:32 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:59.662 01:07:33 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:59.662 01:07:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:59.662 01:07:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:59.662 01:07:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:59.662 01:07:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:59.662 01:07:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.662 01:07:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.921 01:07:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:59.921 "name": "raid_bdev1", 00:24:59.921 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:24:59.921 "strip_size_kb": 64, 00:24:59.921 "state": "online", 00:24:59.921 "raid_level": "raid5f", 00:24:59.921 "superblock": false, 00:24:59.921 "num_base_bdevs": 4, 00:24:59.921 "num_base_bdevs_discovered": 4, 00:24:59.921 "num_base_bdevs_operational": 4, 00:24:59.921 "process": { 00:24:59.921 "type": "rebuild", 00:24:59.921 "target": "spare", 00:24:59.921 "progress": { 00:24:59.921 "blocks": 23040, 00:24:59.922 "percent": 11 00:24:59.922 } 00:24:59.922 }, 00:24:59.922 "base_bdevs_list": [ 00:24:59.922 { 00:24:59.922 "name": "spare", 00:24:59.922 "uuid": "e33f6b64-ff6b-59ee-90e1-4b16375389f0", 00:24:59.922 "is_configured": true, 00:24:59.922 "data_offset": 0, 00:24:59.922 "data_size": 65536 00:24:59.922 }, 00:24:59.922 { 00:24:59.922 "name": "BaseBdev2", 00:24:59.922 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:24:59.922 "is_configured": true, 00:24:59.922 "data_offset": 0, 00:24:59.922 "data_size": 65536 00:24:59.922 }, 00:24:59.922 { 00:24:59.922 "name": "BaseBdev3", 00:24:59.922 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:24:59.922 "is_configured": true, 00:24:59.922 "data_offset": 0, 00:24:59.922 "data_size": 65536 00:24:59.922 }, 00:24:59.922 { 00:24:59.922 "name": "BaseBdev4", 00:24:59.922 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:24:59.922 "is_configured": true, 00:24:59.922 "data_offset": 0, 
00:24:59.922 "data_size": 65536 00:24:59.922 } 00:24:59.922 ] 00:24:59.922 }' 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@657 -- # local timeout=657 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.922 01:07:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.182 01:07:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:00.182 "name": "raid_bdev1", 00:25:00.182 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:25:00.182 "strip_size_kb": 64, 00:25:00.182 "state": "online", 00:25:00.182 "raid_level": "raid5f", 00:25:00.182 "superblock": false, 00:25:00.182 "num_base_bdevs": 4, 00:25:00.182 "num_base_bdevs_discovered": 4, 00:25:00.182 "num_base_bdevs_operational": 4, 00:25:00.182 "process": { 00:25:00.182 "type": "rebuild", 00:25:00.182 "target": "spare", 00:25:00.182 "progress": { 00:25:00.182 "blocks": 26880, 00:25:00.182 "percent": 13 00:25:00.182 } 00:25:00.182 }, 00:25:00.182 "base_bdevs_list": [ 00:25:00.182 { 00:25:00.182 "name": "spare", 00:25:00.182 "uuid": "e33f6b64-ff6b-59ee-90e1-4b16375389f0", 00:25:00.182 "is_configured": true, 00:25:00.182 "data_offset": 0, 00:25:00.182 "data_size": 65536 00:25:00.182 }, 00:25:00.182 { 00:25:00.182 "name": "BaseBdev2", 00:25:00.182 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:25:00.182 "is_configured": true, 00:25:00.182 "data_offset": 0, 00:25:00.182 "data_size": 65536 00:25:00.182 }, 00:25:00.182 { 00:25:00.182 "name": "BaseBdev3", 00:25:00.182 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:25:00.182 "is_configured": true, 00:25:00.182 "data_offset": 0, 00:25:00.182 "data_size": 65536 00:25:00.182 }, 00:25:00.182 { 00:25:00.182 "name": "BaseBdev4", 00:25:00.182 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:25:00.182 "is_configured": true, 00:25:00.182 "data_offset": 0, 00:25:00.182 "data_size": 65536 00:25:00.182 } 00:25:00.182 ] 00:25:00.182 }' 00:25:00.182 01:07:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:00.182 01:07:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:00.182 01:07:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:00.182 01:07:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:00.182 01:07:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:01.119 01:07:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:01.119 01:07:35 -- bdev/bdev_raid.sh@659 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:01.119 01:07:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:01.119 01:07:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:01.119 01:07:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:01.119 01:07:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:01.119 01:07:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.119 01:07:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.378 01:07:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:01.378 "name": "raid_bdev1", 00:25:01.378 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:25:01.378 "strip_size_kb": 64, 00:25:01.378 "state": "online", 00:25:01.378 "raid_level": "raid5f", 00:25:01.378 "superblock": false, 00:25:01.378 "num_base_bdevs": 4, 00:25:01.378 "num_base_bdevs_discovered": 4, 00:25:01.378 "num_base_bdevs_operational": 4, 00:25:01.378 "process": { 00:25:01.378 "type": "rebuild", 00:25:01.378 "target": "spare", 00:25:01.378 "progress": { 00:25:01.378 "blocks": 53760, 00:25:01.378 "percent": 27 00:25:01.378 } 00:25:01.378 }, 00:25:01.378 "base_bdevs_list": [ 00:25:01.378 { 00:25:01.378 "name": "spare", 00:25:01.378 "uuid": "e33f6b64-ff6b-59ee-90e1-4b16375389f0", 00:25:01.378 "is_configured": true, 00:25:01.378 "data_offset": 0, 00:25:01.378 "data_size": 65536 00:25:01.378 }, 00:25:01.378 { 00:25:01.378 "name": "BaseBdev2", 00:25:01.378 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:25:01.378 "is_configured": true, 00:25:01.378 "data_offset": 0, 00:25:01.378 "data_size": 65536 00:25:01.378 }, 00:25:01.378 { 00:25:01.378 "name": "BaseBdev3", 00:25:01.378 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:25:01.378 "is_configured": true, 00:25:01.378 "data_offset": 0, 00:25:01.378 "data_size": 65536 00:25:01.378 }, 00:25:01.378 { 00:25:01.379 "name": "BaseBdev4", 00:25:01.379 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:25:01.379 "is_configured": true, 00:25:01.379 "data_offset": 0, 00:25:01.379 "data_size": 65536 00:25:01.379 } 00:25:01.379 ] 00:25:01.379 }' 00:25:01.379 01:07:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:01.379 01:07:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:01.379 01:07:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:01.638 01:07:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:01.638 01:07:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:02.575 01:07:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:02.575 01:07:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:02.575 01:07:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:02.575 01:07:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:02.575 01:07:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:02.575 01:07:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:02.575 01:07:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.575 01:07:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.834 01:07:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:02.834 "name": "raid_bdev1", 00:25:02.834 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:25:02.834 "strip_size_kb": 64, 00:25:02.834 "state": "online", 
00:25:02.834 "raid_level": "raid5f", 00:25:02.834 "superblock": false, 00:25:02.834 "num_base_bdevs": 4, 00:25:02.834 "num_base_bdevs_discovered": 4, 00:25:02.834 "num_base_bdevs_operational": 4, 00:25:02.834 "process": { 00:25:02.834 "type": "rebuild", 00:25:02.834 "target": "spare", 00:25:02.834 "progress": { 00:25:02.834 "blocks": 78720, 00:25:02.834 "percent": 40 00:25:02.834 } 00:25:02.834 }, 00:25:02.834 "base_bdevs_list": [ 00:25:02.834 { 00:25:02.834 "name": "spare", 00:25:02.835 "uuid": "e33f6b64-ff6b-59ee-90e1-4b16375389f0", 00:25:02.835 "is_configured": true, 00:25:02.835 "data_offset": 0, 00:25:02.835 "data_size": 65536 00:25:02.835 }, 00:25:02.835 { 00:25:02.835 "name": "BaseBdev2", 00:25:02.835 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:25:02.835 "is_configured": true, 00:25:02.835 "data_offset": 0, 00:25:02.835 "data_size": 65536 00:25:02.835 }, 00:25:02.835 { 00:25:02.835 "name": "BaseBdev3", 00:25:02.835 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:25:02.835 "is_configured": true, 00:25:02.835 "data_offset": 0, 00:25:02.835 "data_size": 65536 00:25:02.835 }, 00:25:02.835 { 00:25:02.835 "name": "BaseBdev4", 00:25:02.835 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:25:02.835 "is_configured": true, 00:25:02.835 "data_offset": 0, 00:25:02.835 "data_size": 65536 00:25:02.835 } 00:25:02.835 ] 00:25:02.835 }' 00:25:02.835 01:07:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:02.835 01:07:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:02.835 01:07:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:02.835 01:07:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:02.835 01:07:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:03.772 01:07:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:03.772 01:07:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:03.772 01:07:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:03.772 01:07:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:03.772 01:07:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:03.772 01:07:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:03.772 01:07:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.772 01:07:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.031 01:07:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:04.031 "name": "raid_bdev1", 00:25:04.031 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:25:04.031 "strip_size_kb": 64, 00:25:04.031 "state": "online", 00:25:04.031 "raid_level": "raid5f", 00:25:04.031 "superblock": false, 00:25:04.031 "num_base_bdevs": 4, 00:25:04.031 "num_base_bdevs_discovered": 4, 00:25:04.031 "num_base_bdevs_operational": 4, 00:25:04.031 "process": { 00:25:04.031 "type": "rebuild", 00:25:04.031 "target": "spare", 00:25:04.031 "progress": { 00:25:04.031 "blocks": 103680, 00:25:04.031 "percent": 52 00:25:04.031 } 00:25:04.031 }, 00:25:04.031 "base_bdevs_list": [ 00:25:04.031 { 00:25:04.031 "name": "spare", 00:25:04.031 "uuid": "e33f6b64-ff6b-59ee-90e1-4b16375389f0", 00:25:04.031 "is_configured": true, 00:25:04.031 "data_offset": 0, 00:25:04.031 "data_size": 65536 00:25:04.031 }, 00:25:04.031 { 00:25:04.031 "name": "BaseBdev2", 00:25:04.031 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:25:04.031 "is_configured": true, 00:25:04.031 "data_offset": 0, 
00:25:04.031 "data_size": 65536 00:25:04.031 }, 00:25:04.031 { 00:25:04.031 "name": "BaseBdev3", 00:25:04.031 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:25:04.031 "is_configured": true, 00:25:04.031 "data_offset": 0, 00:25:04.031 "data_size": 65536 00:25:04.031 }, 00:25:04.031 { 00:25:04.031 "name": "BaseBdev4", 00:25:04.031 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:25:04.031 "is_configured": true, 00:25:04.031 "data_offset": 0, 00:25:04.031 "data_size": 65536 00:25:04.031 } 00:25:04.031 ] 00:25:04.031 }' 00:25:04.031 01:07:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:04.290 01:07:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:04.290 01:07:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:04.290 01:07:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:04.290 01:07:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:05.226 01:07:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:05.226 01:07:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:05.226 01:07:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:05.226 01:07:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:05.226 01:07:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:05.226 01:07:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:05.226 01:07:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.226 01:07:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.485 01:07:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:05.485 "name": "raid_bdev1", 00:25:05.485 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:25:05.485 "strip_size_kb": 64, 00:25:05.485 "state": "online", 00:25:05.485 "raid_level": "raid5f", 00:25:05.485 "superblock": false, 00:25:05.485 "num_base_bdevs": 4, 00:25:05.485 "num_base_bdevs_discovered": 4, 00:25:05.485 "num_base_bdevs_operational": 4, 00:25:05.485 "process": { 00:25:05.485 "type": "rebuild", 00:25:05.485 "target": "spare", 00:25:05.485 "progress": { 00:25:05.485 "blocks": 128640, 00:25:05.485 "percent": 65 00:25:05.485 } 00:25:05.485 }, 00:25:05.485 "base_bdevs_list": [ 00:25:05.485 { 00:25:05.485 "name": "spare", 00:25:05.485 "uuid": "e33f6b64-ff6b-59ee-90e1-4b16375389f0", 00:25:05.485 "is_configured": true, 00:25:05.485 "data_offset": 0, 00:25:05.485 "data_size": 65536 00:25:05.485 }, 00:25:05.485 { 00:25:05.485 "name": "BaseBdev2", 00:25:05.485 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:25:05.485 "is_configured": true, 00:25:05.485 "data_offset": 0, 00:25:05.485 "data_size": 65536 00:25:05.485 }, 00:25:05.485 { 00:25:05.485 "name": "BaseBdev3", 00:25:05.485 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:25:05.485 "is_configured": true, 00:25:05.485 "data_offset": 0, 00:25:05.485 "data_size": 65536 00:25:05.485 }, 00:25:05.485 { 00:25:05.485 "name": "BaseBdev4", 00:25:05.485 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:25:05.485 "is_configured": true, 00:25:05.485 "data_offset": 0, 00:25:05.485 "data_size": 65536 00:25:05.485 } 00:25:05.485 ] 00:25:05.485 }' 00:25:05.485 01:07:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:05.485 01:07:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:05.485 01:07:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:05.485 01:07:39 -- bdev/bdev_raid.sh@191 -- # [[ 
spare == \s\p\a\r\e ]] 00:25:05.485 01:07:39 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:06.864 01:07:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:06.864 01:07:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:06.864 01:07:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:06.864 01:07:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:06.864 01:07:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:06.864 01:07:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:06.864 01:07:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.864 01:07:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.864 01:07:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:06.864 "name": "raid_bdev1", 00:25:06.864 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:25:06.864 "strip_size_kb": 64, 00:25:06.864 "state": "online", 00:25:06.864 "raid_level": "raid5f", 00:25:06.864 "superblock": false, 00:25:06.864 "num_base_bdevs": 4, 00:25:06.864 "num_base_bdevs_discovered": 4, 00:25:06.864 "num_base_bdevs_operational": 4, 00:25:06.864 "process": { 00:25:06.864 "type": "rebuild", 00:25:06.864 "target": "spare", 00:25:06.864 "progress": { 00:25:06.864 "blocks": 155520, 00:25:06.864 "percent": 79 00:25:06.864 } 00:25:06.864 }, 00:25:06.864 "base_bdevs_list": [ 00:25:06.864 { 00:25:06.864 "name": "spare", 00:25:06.864 "uuid": "e33f6b64-ff6b-59ee-90e1-4b16375389f0", 00:25:06.864 "is_configured": true, 00:25:06.864 "data_offset": 0, 00:25:06.864 "data_size": 65536 00:25:06.864 }, 00:25:06.864 { 00:25:06.864 "name": "BaseBdev2", 00:25:06.864 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:25:06.864 "is_configured": true, 00:25:06.864 "data_offset": 0, 00:25:06.864 "data_size": 65536 00:25:06.864 }, 00:25:06.864 { 00:25:06.864 "name": "BaseBdev3", 00:25:06.864 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:25:06.864 "is_configured": true, 00:25:06.864 "data_offset": 0, 00:25:06.864 "data_size": 65536 00:25:06.864 }, 00:25:06.864 { 00:25:06.864 "name": "BaseBdev4", 00:25:06.864 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:25:06.864 "is_configured": true, 00:25:06.864 "data_offset": 0, 00:25:06.864 "data_size": 65536 00:25:06.864 } 00:25:06.864 ] 00:25:06.864 }' 00:25:06.864 01:07:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:06.864 01:07:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:06.864 01:07:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:06.864 01:07:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:06.864 01:07:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:08.243 01:07:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:08.243 01:07:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:08.243 01:07:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:08.243 01:07:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:08.243 01:07:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:08.243 01:07:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:08.243 01:07:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.243 01:07:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.243 01:07:42 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:08.243 "name": "raid_bdev1", 00:25:08.243 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:25:08.243 "strip_size_kb": 64, 00:25:08.243 "state": "online", 00:25:08.243 "raid_level": "raid5f", 00:25:08.243 "superblock": false, 00:25:08.243 "num_base_bdevs": 4, 00:25:08.243 "num_base_bdevs_discovered": 4, 00:25:08.243 "num_base_bdevs_operational": 4, 00:25:08.243 "process": { 00:25:08.243 "type": "rebuild", 00:25:08.243 "target": "spare", 00:25:08.243 "progress": { 00:25:08.243 "blocks": 182400, 00:25:08.243 "percent": 92 00:25:08.243 } 00:25:08.243 }, 00:25:08.243 "base_bdevs_list": [ 00:25:08.243 { 00:25:08.243 "name": "spare", 00:25:08.243 "uuid": "e33f6b64-ff6b-59ee-90e1-4b16375389f0", 00:25:08.243 "is_configured": true, 00:25:08.243 "data_offset": 0, 00:25:08.243 "data_size": 65536 00:25:08.243 }, 00:25:08.243 { 00:25:08.243 "name": "BaseBdev2", 00:25:08.243 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:25:08.243 "is_configured": true, 00:25:08.243 "data_offset": 0, 00:25:08.243 "data_size": 65536 00:25:08.243 }, 00:25:08.243 { 00:25:08.243 "name": "BaseBdev3", 00:25:08.243 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:25:08.243 "is_configured": true, 00:25:08.243 "data_offset": 0, 00:25:08.243 "data_size": 65536 00:25:08.243 }, 00:25:08.243 { 00:25:08.243 "name": "BaseBdev4", 00:25:08.243 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:25:08.243 "is_configured": true, 00:25:08.243 "data_offset": 0, 00:25:08.243 "data_size": 65536 00:25:08.243 } 00:25:08.243 ] 00:25:08.243 }' 00:25:08.243 01:07:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:08.243 01:07:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:08.243 01:07:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:08.243 01:07:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:08.243 01:07:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:09.181 [2024-11-18 01:07:43.237876] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:09.181 [2024-11-18 01:07:43.238114] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:09.181 [2024-11-18 01:07:43.238310] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:09.181 01:07:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:09.181 01:07:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:09.181 01:07:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:09.181 01:07:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:09.181 01:07:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:09.181 01:07:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:09.181 01:07:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.181 01:07:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.440 01:07:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:09.440 "name": "raid_bdev1", 00:25:09.440 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:25:09.440 "strip_size_kb": 64, 00:25:09.440 "state": "online", 00:25:09.440 "raid_level": "raid5f", 00:25:09.440 "superblock": false, 00:25:09.440 "num_base_bdevs": 4, 00:25:09.440 "num_base_bdevs_discovered": 4, 00:25:09.440 "num_base_bdevs_operational": 4, 00:25:09.440 "base_bdevs_list": [ 00:25:09.440 { 
00:25:09.440 "name": "spare", 00:25:09.440 "uuid": "e33f6b64-ff6b-59ee-90e1-4b16375389f0", 00:25:09.440 "is_configured": true, 00:25:09.440 "data_offset": 0, 00:25:09.440 "data_size": 65536 00:25:09.440 }, 00:25:09.440 { 00:25:09.440 "name": "BaseBdev2", 00:25:09.440 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:25:09.440 "is_configured": true, 00:25:09.440 "data_offset": 0, 00:25:09.440 "data_size": 65536 00:25:09.440 }, 00:25:09.440 { 00:25:09.440 "name": "BaseBdev3", 00:25:09.440 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:25:09.440 "is_configured": true, 00:25:09.440 "data_offset": 0, 00:25:09.440 "data_size": 65536 00:25:09.440 }, 00:25:09.440 { 00:25:09.440 "name": "BaseBdev4", 00:25:09.440 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:25:09.440 "is_configured": true, 00:25:09.440 "data_offset": 0, 00:25:09.440 "data_size": 65536 00:25:09.440 } 00:25:09.440 ] 00:25:09.440 }' 00:25:09.440 01:07:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:09.699 01:07:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:09.699 01:07:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:09.699 01:07:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:09.699 01:07:43 -- bdev/bdev_raid.sh@660 -- # break 00:25:09.699 01:07:43 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:09.699 01:07:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:09.699 01:07:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:09.699 01:07:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:09.699 01:07:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:09.699 01:07:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.699 01:07:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.958 01:07:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:09.958 "name": "raid_bdev1", 00:25:09.958 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:25:09.958 "strip_size_kb": 64, 00:25:09.958 "state": "online", 00:25:09.958 "raid_level": "raid5f", 00:25:09.958 "superblock": false, 00:25:09.958 "num_base_bdevs": 4, 00:25:09.958 "num_base_bdevs_discovered": 4, 00:25:09.958 "num_base_bdevs_operational": 4, 00:25:09.958 "base_bdevs_list": [ 00:25:09.958 { 00:25:09.958 "name": "spare", 00:25:09.958 "uuid": "e33f6b64-ff6b-59ee-90e1-4b16375389f0", 00:25:09.958 "is_configured": true, 00:25:09.958 "data_offset": 0, 00:25:09.958 "data_size": 65536 00:25:09.958 }, 00:25:09.958 { 00:25:09.958 "name": "BaseBdev2", 00:25:09.958 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:25:09.958 "is_configured": true, 00:25:09.958 "data_offset": 0, 00:25:09.958 "data_size": 65536 00:25:09.958 }, 00:25:09.958 { 00:25:09.958 "name": "BaseBdev3", 00:25:09.958 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:25:09.958 "is_configured": true, 00:25:09.958 "data_offset": 0, 00:25:09.958 "data_size": 65536 00:25:09.958 }, 00:25:09.958 { 00:25:09.958 "name": "BaseBdev4", 00:25:09.958 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:25:09.958 "is_configured": true, 00:25:09.958 "data_offset": 0, 00:25:09.958 "data_size": 65536 00:25:09.958 } 00:25:09.958 ] 00:25:09.958 }' 00:25:09.958 01:07:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:09.958 01:07:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:09.958 01:07:44 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:25:09.959 01:07:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:09.959 01:07:44 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:09.959 01:07:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:09.959 01:07:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:09.959 01:07:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:09.959 01:07:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:09.959 01:07:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:09.959 01:07:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:09.959 01:07:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:09.959 01:07:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:09.959 01:07:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:09.959 01:07:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.959 01:07:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.217 01:07:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:10.217 "name": "raid_bdev1", 00:25:10.217 "uuid": "1c24583f-9ea3-414c-b5d0-f95b9d5b4052", 00:25:10.217 "strip_size_kb": 64, 00:25:10.217 "state": "online", 00:25:10.217 "raid_level": "raid5f", 00:25:10.217 "superblock": false, 00:25:10.217 "num_base_bdevs": 4, 00:25:10.217 "num_base_bdevs_discovered": 4, 00:25:10.217 "num_base_bdevs_operational": 4, 00:25:10.217 "base_bdevs_list": [ 00:25:10.217 { 00:25:10.217 "name": "spare", 00:25:10.217 "uuid": "e33f6b64-ff6b-59ee-90e1-4b16375389f0", 00:25:10.217 "is_configured": true, 00:25:10.217 "data_offset": 0, 00:25:10.217 "data_size": 65536 00:25:10.217 }, 00:25:10.217 { 00:25:10.217 "name": "BaseBdev2", 00:25:10.217 "uuid": "7a0cf1e8-4554-4229-a82f-d5270b25c156", 00:25:10.217 "is_configured": true, 00:25:10.217 "data_offset": 0, 00:25:10.217 "data_size": 65536 00:25:10.217 }, 00:25:10.217 { 00:25:10.217 "name": "BaseBdev3", 00:25:10.218 "uuid": "5e952845-ec57-44a5-9ad5-b636fbdc0a8c", 00:25:10.218 "is_configured": true, 00:25:10.218 "data_offset": 0, 00:25:10.218 "data_size": 65536 00:25:10.218 }, 00:25:10.218 { 00:25:10.218 "name": "BaseBdev4", 00:25:10.218 "uuid": "8deb475e-64bb-4522-8e2c-9653b7dbb59e", 00:25:10.218 "is_configured": true, 00:25:10.218 "data_offset": 0, 00:25:10.218 "data_size": 65536 00:25:10.218 } 00:25:10.218 ] 00:25:10.218 }' 00:25:10.218 01:07:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:10.218 01:07:44 -- common/autotest_common.sh@10 -- # set +x 00:25:10.784 01:07:45 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:11.043 [2024-11-18 01:07:45.206144] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:11.043 [2024-11-18 01:07:45.206336] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:11.043 [2024-11-18 01:07:45.206539] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:11.043 [2024-11-18 01:07:45.206722] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:11.043 [2024-11-18 01:07:45.206806] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:25:11.043 01:07:45 -- bdev/bdev_raid.sh@671 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.043 01:07:45 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:11.302 01:07:45 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:11.302 01:07:45 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:11.302 01:07:45 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:11.302 01:07:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:11.302 01:07:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:11.302 01:07:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:11.302 01:07:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:11.302 01:07:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:11.302 01:07:45 -- bdev/nbd_common.sh@12 -- # local i 00:25:11.302 01:07:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:11.302 01:07:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:11.302 01:07:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:11.302 /dev/nbd0 00:25:11.302 01:07:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:11.302 01:07:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:11.302 01:07:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:11.302 01:07:45 -- common/autotest_common.sh@867 -- # local i 00:25:11.302 01:07:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:11.302 01:07:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:11.302 01:07:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:11.302 01:07:45 -- common/autotest_common.sh@871 -- # break 00:25:11.302 01:07:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:11.302 01:07:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:11.302 01:07:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:11.302 1+0 records in 00:25:11.302 1+0 records out 00:25:11.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345533 s, 11.9 MB/s 00:25:11.562 01:07:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:11.562 01:07:45 -- common/autotest_common.sh@884 -- # size=4096 00:25:11.562 01:07:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:11.562 01:07:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:11.562 01:07:45 -- common/autotest_common.sh@887 -- # return 0 00:25:11.562 01:07:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:11.562 01:07:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:11.562 01:07:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:11.820 /dev/nbd1 00:25:11.820 01:07:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:11.820 01:07:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:11.820 01:07:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:11.820 01:07:45 -- common/autotest_common.sh@867 -- # local i 00:25:11.820 01:07:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:11.820 01:07:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:11.820 01:07:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:11.820 01:07:45 -- common/autotest_common.sh@871 
-- # break 00:25:11.820 01:07:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:11.820 01:07:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:11.820 01:07:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:11.820 1+0 records in 00:25:11.820 1+0 records out 00:25:11.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446464 s, 9.2 MB/s 00:25:11.820 01:07:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:11.820 01:07:46 -- common/autotest_common.sh@884 -- # size=4096 00:25:11.821 01:07:46 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:11.821 01:07:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:11.821 01:07:46 -- common/autotest_common.sh@887 -- # return 0 00:25:11.821 01:07:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:11.821 01:07:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:11.821 01:07:46 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:11.821 01:07:46 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:11.821 01:07:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:11.821 01:07:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:11.821 01:07:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:11.821 01:07:46 -- bdev/nbd_common.sh@51 -- # local i 00:25:11.821 01:07:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:11.821 01:07:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:12.079 01:07:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:12.079 01:07:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:12.079 01:07:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:12.079 01:07:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:12.079 01:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:12.079 01:07:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:12.079 01:07:46 -- bdev/nbd_common.sh@41 -- # break 00:25:12.079 01:07:46 -- bdev/nbd_common.sh@45 -- # return 0 00:25:12.079 01:07:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:12.079 01:07:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:12.339 01:07:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:12.339 01:07:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:12.339 01:07:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:12.339 01:07:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:12.339 01:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:12.339 01:07:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:12.339 01:07:46 -- bdev/nbd_common.sh@41 -- # break 00:25:12.339 01:07:46 -- bdev/nbd_common.sh@45 -- # return 0 00:25:12.339 01:07:46 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:12.339 01:07:46 -- bdev/bdev_raid.sh@709 -- # killprocess 141423 00:25:12.339 01:07:46 -- common/autotest_common.sh@936 -- # '[' -z 141423 ']' 00:25:12.339 01:07:46 -- common/autotest_common.sh@940 -- # kill -0 141423 00:25:12.339 01:07:46 -- common/autotest_common.sh@941 -- # uname 00:25:12.339 01:07:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:12.339 01:07:46 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141423 00:25:12.339 01:07:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:12.339 01:07:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:12.339 01:07:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141423' 00:25:12.339 killing process with pid 141423 00:25:12.339 01:07:46 -- common/autotest_common.sh@955 -- # kill 141423 00:25:12.339 Received shutdown signal, test time was about 60.000000 seconds 00:25:12.339 00:25:12.339 Latency(us) 00:25:12.339 [2024-11-18T01:07:46.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.339 [2024-11-18T01:07:46.738Z] =================================================================================================================== 00:25:12.339 [2024-11-18T01:07:46.738Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:12.339 01:07:46 -- common/autotest_common.sh@960 -- # wait 141423 00:25:12.339 [2024-11-18 01:07:46.683170] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:12.598 [2024-11-18 01:07:46.770081] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:12.856 01:07:47 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:12.856 00:25:12.856 real 0m23.234s 00:25:12.856 user 0m33.000s 00:25:12.856 sys 0m3.620s 00:25:12.856 01:07:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:12.856 01:07:47 -- common/autotest_common.sh@10 -- # set +x 00:25:12.856 ************************************ 00:25:12.856 END TEST raid5f_rebuild_test 00:25:12.856 ************************************ 00:25:12.856 01:07:47 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:25:12.856 01:07:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:25:12.856 01:07:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:12.857 01:07:47 -- common/autotest_common.sh@10 -- # set +x 00:25:12.857 ************************************ 00:25:12.857 START TEST raid5f_rebuild_test_sb 00:25:12.857 ************************************ 00:25:12.857 01:07:47 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 true false 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:12.857 01:07:47 -- 
bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:12.857 01:07:47 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:13.116 01:07:47 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:25:13.116 01:07:47 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:25:13.116 01:07:47 -- bdev/bdev_raid.sh@544 -- # raid_pid=142020 00:25:13.116 01:07:47 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:13.116 01:07:47 -- bdev/bdev_raid.sh@545 -- # waitforlisten 142020 /var/tmp/spdk-raid.sock 00:25:13.116 01:07:47 -- common/autotest_common.sh@829 -- # '[' -z 142020 ']' 00:25:13.116 01:07:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:13.116 01:07:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:13.116 01:07:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:13.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:13.116 01:07:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:13.116 01:07:47 -- common/autotest_common.sh@10 -- # set +x 00:25:13.116 [2024-11-18 01:07:47.327218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:13.116 [2024-11-18 01:07:47.327765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142020 ] 00:25:13.116 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:13.116 Zero copy mechanism will not be used. 
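For reference, the setup that the following entries perform one RPC at a time over the bdevperf socket condenses to the sketch below; the socket path, malloc geometry and raid parameters are taken verbatim from this run, and the loop is only a shorthand for the four identical base-bdev pairs the log creates one after another.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Four 32 MB malloc bdevs with 512-byte blocks, each wrapped in a passthru
# bdev so the test can claim and release it independently of its backing store.
for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    $RPC bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
done

# The spare sits behind a delay bdev, which adds write latency so the later
# rebuild onto it stays slow enough to observe.
$RPC bdev_malloc_create 32 512 -b spare_malloc
$RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$RPC bdev_passthru_create -b spare_delay -p spare

# raid5f across the four base bdevs, 64 KiB strip, with an on-disk superblock (-s).
$RPC bdev_raid_create -z 64 -s -r raid5f \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
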
00:25:13.116 [2024-11-18 01:07:47.479581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.401 [2024-11-18 01:07:47.566054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.401 [2024-11-18 01:07:47.643989] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:14.010 01:07:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:14.010 01:07:48 -- common/autotest_common.sh@862 -- # return 0 00:25:14.010 01:07:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:14.010 01:07:48 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:14.010 01:07:48 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:14.269 BaseBdev1_malloc 00:25:14.270 01:07:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:14.529 [2024-11-18 01:07:48.732866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:14.529 [2024-11-18 01:07:48.733175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:14.529 [2024-11-18 01:07:48.733260] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:25:14.529 [2024-11-18 01:07:48.733392] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:14.529 [2024-11-18 01:07:48.736250] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:14.529 [2024-11-18 01:07:48.736429] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:14.529 BaseBdev1 00:25:14.529 01:07:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:14.529 01:07:48 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:14.529 01:07:48 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:14.529 BaseBdev2_malloc 00:25:14.789 01:07:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:14.789 [2024-11-18 01:07:49.144505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:14.789 [2024-11-18 01:07:49.144776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:14.789 [2024-11-18 01:07:49.144854] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:25:14.789 [2024-11-18 01:07:49.144973] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:14.789 [2024-11-18 01:07:49.147685] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:14.789 [2024-11-18 01:07:49.147859] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:14.789 BaseBdev2 00:25:14.789 01:07:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:14.789 01:07:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:14.789 01:07:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:15.048 BaseBdev3_malloc 00:25:15.306 01:07:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:25:15.306 [2024-11-18 01:07:49.618422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:15.306 [2024-11-18 01:07:49.618723] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.306 [2024-11-18 01:07:49.618811] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:15.306 [2024-11-18 01:07:49.618937] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.306 [2024-11-18 01:07:49.621750] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.306 [2024-11-18 01:07:49.621909] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:15.306 BaseBdev3 00:25:15.306 01:07:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:15.306 01:07:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:15.306 01:07:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:15.566 BaseBdev4_malloc 00:25:15.566 01:07:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:15.825 [2024-11-18 01:07:50.062466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:15.825 [2024-11-18 01:07:50.062790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.825 [2024-11-18 01:07:50.062865] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:15.825 [2024-11-18 01:07:50.062981] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.825 [2024-11-18 01:07:50.065690] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.825 [2024-11-18 01:07:50.065894] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:15.825 BaseBdev4 00:25:15.825 01:07:50 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:16.084 spare_malloc 00:25:16.084 01:07:50 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:16.084 spare_delay 00:25:16.084 01:07:50 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:16.344 [2024-11-18 01:07:50.626314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:16.344 [2024-11-18 01:07:50.626583] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.344 [2024-11-18 01:07:50.626657] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:16.344 [2024-11-18 01:07:50.626775] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.344 [2024-11-18 01:07:50.629599] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.344 [2024-11-18 01:07:50.629751] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:16.344 spare 00:25:16.344 01:07:50 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:16.602 [2024-11-18 01:07:50.810610] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:16.602 [2024-11-18 01:07:50.813241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:16.602 [2024-11-18 01:07:50.813424] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:16.602 [2024-11-18 01:07:50.813500] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:16.602 [2024-11-18 01:07:50.813818] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:25:16.602 [2024-11-18 01:07:50.813905] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:16.602 [2024-11-18 01:07:50.814103] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:25:16.602 [2024-11-18 01:07:50.815010] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:25:16.602 [2024-11-18 01:07:50.815114] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:25:16.602 [2024-11-18 01:07:50.815401] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.602 01:07:50 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:16.602 01:07:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:16.602 01:07:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:16.602 01:07:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:16.602 01:07:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:16.602 01:07:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:16.602 01:07:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:16.602 01:07:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:16.602 01:07:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:16.602 01:07:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:16.602 01:07:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.602 01:07:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.861 01:07:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:16.861 "name": "raid_bdev1", 00:25:16.861 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:16.861 "strip_size_kb": 64, 00:25:16.861 "state": "online", 00:25:16.861 "raid_level": "raid5f", 00:25:16.861 "superblock": true, 00:25:16.861 "num_base_bdevs": 4, 00:25:16.861 "num_base_bdevs_discovered": 4, 00:25:16.861 "num_base_bdevs_operational": 4, 00:25:16.861 "base_bdevs_list": [ 00:25:16.861 { 00:25:16.861 "name": "BaseBdev1", 00:25:16.861 "uuid": "57062400-c407-58f1-ab74-7899e48054a3", 00:25:16.861 "is_configured": true, 00:25:16.861 "data_offset": 2048, 00:25:16.861 "data_size": 63488 00:25:16.861 }, 00:25:16.861 { 00:25:16.861 "name": "BaseBdev2", 00:25:16.861 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:16.861 "is_configured": true, 00:25:16.861 "data_offset": 2048, 00:25:16.861 "data_size": 63488 00:25:16.861 }, 00:25:16.861 { 00:25:16.861 "name": "BaseBdev3", 00:25:16.861 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:16.861 "is_configured": true, 00:25:16.861 "data_offset": 2048, 00:25:16.861 "data_size": 63488 00:25:16.861 
}, 00:25:16.861 { 00:25:16.861 "name": "BaseBdev4", 00:25:16.861 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:16.861 "is_configured": true, 00:25:16.861 "data_offset": 2048, 00:25:16.861 "data_size": 63488 00:25:16.861 } 00:25:16.861 ] 00:25:16.861 }' 00:25:16.861 01:07:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:16.861 01:07:51 -- common/autotest_common.sh@10 -- # set +x 00:25:17.119 01:07:51 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:17.119 01:07:51 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:17.378 [2024-11-18 01:07:51.763666] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:17.636 01:07:51 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:25:17.636 01:07:51 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:17.636 01:07:51 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.636 01:07:51 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:25:17.636 01:07:51 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:17.636 01:07:51 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:17.636 01:07:51 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:17.636 01:07:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:17.636 01:07:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:17.636 01:07:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:17.636 01:07:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:17.636 01:07:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:17.636 01:07:51 -- bdev/nbd_common.sh@12 -- # local i 00:25:17.636 01:07:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:17.636 01:07:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:17.636 01:07:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:17.895 [2024-11-18 01:07:52.139670] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:25:17.895 /dev/nbd0 00:25:17.895 01:07:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:17.895 01:07:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:17.895 01:07:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:17.895 01:07:52 -- common/autotest_common.sh@867 -- # local i 00:25:17.895 01:07:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:17.895 01:07:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:17.895 01:07:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:17.895 01:07:52 -- common/autotest_common.sh@871 -- # break 00:25:17.895 01:07:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:17.895 01:07:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:17.895 01:07:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:17.895 1+0 records in 00:25:17.895 1+0 records out 00:25:17.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219907 s, 18.6 MB/s 00:25:17.895 01:07:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:17.895 01:07:52 -- common/autotest_common.sh@884 -- # size=4096 00:25:17.895 01:07:52 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:17.895 01:07:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:17.895 01:07:52 -- common/autotest_common.sh@887 -- # return 0 00:25:17.895 01:07:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:17.895 01:07:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:17.895 01:07:52 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:17.895 01:07:52 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:17.895 01:07:52 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:17.895 01:07:52 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:25:18.462 496+0 records in 00:25:18.462 496+0 records out 00:25:18.462 97517568 bytes (98 MB, 93 MiB) copied, 0.410096 s, 238 MB/s 00:25:18.462 01:07:52 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:18.462 01:07:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:18.463 01:07:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:18.463 01:07:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:18.463 01:07:52 -- bdev/nbd_common.sh@51 -- # local i 00:25:18.463 01:07:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:18.463 01:07:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:18.722 01:07:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:18.722 [2024-11-18 01:07:52.875011] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:18.722 01:07:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:18.722 01:07:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:18.722 01:07:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:18.722 01:07:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:18.722 01:07:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:18.722 01:07:52 -- bdev/nbd_common.sh@41 -- # break 00:25:18.722 01:07:52 -- bdev/nbd_common.sh@45 -- # return 0 00:25:18.722 01:07:52 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:18.722 [2024-11-18 01:07:53.110606] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:18.981 01:07:53 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:18.981 01:07:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:18.981 01:07:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:18.981 01:07:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:18.981 01:07:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:18.981 01:07:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:18.981 01:07:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:18.981 01:07:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:18.981 01:07:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:18.981 01:07:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:18.981 01:07:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.981 01:07:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.981 01:07:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:18.981 "name": "raid_bdev1", 00:25:18.981 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:18.981 
"strip_size_kb": 64, 00:25:18.981 "state": "online", 00:25:18.981 "raid_level": "raid5f", 00:25:18.981 "superblock": true, 00:25:18.981 "num_base_bdevs": 4, 00:25:18.981 "num_base_bdevs_discovered": 3, 00:25:18.981 "num_base_bdevs_operational": 3, 00:25:18.981 "base_bdevs_list": [ 00:25:18.981 { 00:25:18.981 "name": null, 00:25:18.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.981 "is_configured": false, 00:25:18.981 "data_offset": 2048, 00:25:18.981 "data_size": 63488 00:25:18.981 }, 00:25:18.981 { 00:25:18.981 "name": "BaseBdev2", 00:25:18.981 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:18.981 "is_configured": true, 00:25:18.981 "data_offset": 2048, 00:25:18.981 "data_size": 63488 00:25:18.981 }, 00:25:18.981 { 00:25:18.981 "name": "BaseBdev3", 00:25:18.981 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:18.981 "is_configured": true, 00:25:18.981 "data_offset": 2048, 00:25:18.981 "data_size": 63488 00:25:18.981 }, 00:25:18.981 { 00:25:18.981 "name": "BaseBdev4", 00:25:18.981 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:18.981 "is_configured": true, 00:25:18.981 "data_offset": 2048, 00:25:18.981 "data_size": 63488 00:25:18.981 } 00:25:18.981 ] 00:25:18.981 }' 00:25:18.981 01:07:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:18.981 01:07:53 -- common/autotest_common.sh@10 -- # set +x 00:25:19.548 01:07:53 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:19.807 [2024-11-18 01:07:53.994927] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:19.807 [2024-11-18 01:07:53.994987] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:19.807 [2024-11-18 01:07:54.001023] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:25:19.807 [2024-11-18 01:07:54.003979] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:19.807 01:07:54 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:20.744 01:07:55 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:20.744 01:07:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:20.744 01:07:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:20.744 01:07:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:20.744 01:07:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:20.744 01:07:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.744 01:07:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.004 01:07:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:21.004 "name": "raid_bdev1", 00:25:21.004 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:21.004 "strip_size_kb": 64, 00:25:21.004 "state": "online", 00:25:21.004 "raid_level": "raid5f", 00:25:21.004 "superblock": true, 00:25:21.004 "num_base_bdevs": 4, 00:25:21.004 "num_base_bdevs_discovered": 4, 00:25:21.004 "num_base_bdevs_operational": 4, 00:25:21.004 "process": { 00:25:21.004 "type": "rebuild", 00:25:21.004 "target": "spare", 00:25:21.004 "progress": { 00:25:21.004 "blocks": 23040, 00:25:21.004 "percent": 12 00:25:21.004 } 00:25:21.004 }, 00:25:21.004 "base_bdevs_list": [ 00:25:21.004 { 00:25:21.004 "name": "spare", 00:25:21.004 "uuid": "13c4ac96-fbfd-5bf1-baa6-e9cdde92a9b7", 00:25:21.004 "is_configured": true, 
00:25:21.004 "data_offset": 2048, 00:25:21.004 "data_size": 63488 00:25:21.004 }, 00:25:21.004 { 00:25:21.004 "name": "BaseBdev2", 00:25:21.004 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:21.004 "is_configured": true, 00:25:21.004 "data_offset": 2048, 00:25:21.004 "data_size": 63488 00:25:21.004 }, 00:25:21.004 { 00:25:21.004 "name": "BaseBdev3", 00:25:21.004 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:21.004 "is_configured": true, 00:25:21.004 "data_offset": 2048, 00:25:21.004 "data_size": 63488 00:25:21.004 }, 00:25:21.004 { 00:25:21.004 "name": "BaseBdev4", 00:25:21.004 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:21.004 "is_configured": true, 00:25:21.004 "data_offset": 2048, 00:25:21.004 "data_size": 63488 00:25:21.004 } 00:25:21.004 ] 00:25:21.004 }' 00:25:21.004 01:07:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:21.004 01:07:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:21.004 01:07:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:21.263 01:07:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:21.263 01:07:55 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:21.263 [2024-11-18 01:07:55.649221] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:21.523 [2024-11-18 01:07:55.716841] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:21.523 [2024-11-18 01:07:55.716956] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:21.523 01:07:55 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:21.523 01:07:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:21.523 01:07:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:21.523 01:07:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:21.523 01:07:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:21.523 01:07:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:21.523 01:07:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:21.523 01:07:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:21.523 01:07:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:21.523 01:07:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:21.523 01:07:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.523 01:07:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.523 01:07:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:21.523 "name": "raid_bdev1", 00:25:21.523 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:21.523 "strip_size_kb": 64, 00:25:21.523 "state": "online", 00:25:21.523 "raid_level": "raid5f", 00:25:21.523 "superblock": true, 00:25:21.523 "num_base_bdevs": 4, 00:25:21.523 "num_base_bdevs_discovered": 3, 00:25:21.523 "num_base_bdevs_operational": 3, 00:25:21.523 "base_bdevs_list": [ 00:25:21.523 { 00:25:21.523 "name": null, 00:25:21.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.523 "is_configured": false, 00:25:21.523 "data_offset": 2048, 00:25:21.523 "data_size": 63488 00:25:21.523 }, 00:25:21.523 { 00:25:21.523 "name": "BaseBdev2", 00:25:21.523 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:21.523 "is_configured": true, 00:25:21.523 "data_offset": 
2048, 00:25:21.523 "data_size": 63488 00:25:21.523 }, 00:25:21.523 { 00:25:21.523 "name": "BaseBdev3", 00:25:21.523 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:21.523 "is_configured": true, 00:25:21.523 "data_offset": 2048, 00:25:21.523 "data_size": 63488 00:25:21.523 }, 00:25:21.523 { 00:25:21.523 "name": "BaseBdev4", 00:25:21.523 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:21.523 "is_configured": true, 00:25:21.523 "data_offset": 2048, 00:25:21.523 "data_size": 63488 00:25:21.523 } 00:25:21.523 ] 00:25:21.523 }' 00:25:21.523 01:07:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:21.523 01:07:55 -- common/autotest_common.sh@10 -- # set +x 00:25:22.091 01:07:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:22.091 01:07:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:22.091 01:07:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:22.091 01:07:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:22.091 01:07:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:22.091 01:07:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.091 01:07:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.350 01:07:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:22.350 "name": "raid_bdev1", 00:25:22.350 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:22.350 "strip_size_kb": 64, 00:25:22.350 "state": "online", 00:25:22.350 "raid_level": "raid5f", 00:25:22.350 "superblock": true, 00:25:22.350 "num_base_bdevs": 4, 00:25:22.350 "num_base_bdevs_discovered": 3, 00:25:22.350 "num_base_bdevs_operational": 3, 00:25:22.350 "base_bdevs_list": [ 00:25:22.350 { 00:25:22.350 "name": null, 00:25:22.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.350 "is_configured": false, 00:25:22.350 "data_offset": 2048, 00:25:22.350 "data_size": 63488 00:25:22.350 }, 00:25:22.350 { 00:25:22.350 "name": "BaseBdev2", 00:25:22.350 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:22.350 "is_configured": true, 00:25:22.350 "data_offset": 2048, 00:25:22.350 "data_size": 63488 00:25:22.350 }, 00:25:22.350 { 00:25:22.350 "name": "BaseBdev3", 00:25:22.350 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:22.350 "is_configured": true, 00:25:22.350 "data_offset": 2048, 00:25:22.350 "data_size": 63488 00:25:22.350 }, 00:25:22.350 { 00:25:22.350 "name": "BaseBdev4", 00:25:22.350 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:22.350 "is_configured": true, 00:25:22.350 "data_offset": 2048, 00:25:22.350 "data_size": 63488 00:25:22.350 } 00:25:22.350 ] 00:25:22.350 }' 00:25:22.350 01:07:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:22.609 01:07:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:22.609 01:07:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:22.609 01:07:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:22.609 01:07:56 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:22.868 [2024-11-18 01:07:57.082765] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:22.868 [2024-11-18 01:07:57.082817] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:22.868 [2024-11-18 01:07:57.088825] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000027240 00:25:22.868 [2024-11-18 01:07:57.091620] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:22.868 01:07:57 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:23.807 01:07:58 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:23.807 01:07:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:23.807 01:07:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:23.807 01:07:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:23.807 01:07:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:23.807 01:07:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.807 01:07:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:24.066 "name": "raid_bdev1", 00:25:24.066 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:24.066 "strip_size_kb": 64, 00:25:24.066 "state": "online", 00:25:24.066 "raid_level": "raid5f", 00:25:24.066 "superblock": true, 00:25:24.066 "num_base_bdevs": 4, 00:25:24.066 "num_base_bdevs_discovered": 4, 00:25:24.066 "num_base_bdevs_operational": 4, 00:25:24.066 "process": { 00:25:24.066 "type": "rebuild", 00:25:24.066 "target": "spare", 00:25:24.066 "progress": { 00:25:24.066 "blocks": 23040, 00:25:24.066 "percent": 12 00:25:24.066 } 00:25:24.066 }, 00:25:24.066 "base_bdevs_list": [ 00:25:24.066 { 00:25:24.066 "name": "spare", 00:25:24.066 "uuid": "13c4ac96-fbfd-5bf1-baa6-e9cdde92a9b7", 00:25:24.066 "is_configured": true, 00:25:24.066 "data_offset": 2048, 00:25:24.066 "data_size": 63488 00:25:24.066 }, 00:25:24.066 { 00:25:24.066 "name": "BaseBdev2", 00:25:24.066 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:24.066 "is_configured": true, 00:25:24.066 "data_offset": 2048, 00:25:24.066 "data_size": 63488 00:25:24.066 }, 00:25:24.066 { 00:25:24.066 "name": "BaseBdev3", 00:25:24.066 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:24.066 "is_configured": true, 00:25:24.066 "data_offset": 2048, 00:25:24.066 "data_size": 63488 00:25:24.066 }, 00:25:24.066 { 00:25:24.066 "name": "BaseBdev4", 00:25:24.066 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:24.066 "is_configured": true, 00:25:24.066 "data_offset": 2048, 00:25:24.066 "data_size": 63488 00:25:24.066 } 00:25:24.066 ] 00:25:24.066 }' 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:25:24.066 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@657 -- # local timeout=681 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
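From this point the test polls the raid bdev once per second and reads the rebuild process fields out of bdev_raid_get_bdevs with jq. A condensed sketch of that polling loop follows; it uses only the socket, bdev name, jq filters and progress fields that appear in the surrounding entries, while the loop structure itself is a paraphrase of verify_raid_bdev_process combined with the test's sleep 1.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Keep polling while a rebuild process is still reported for raid_bdev1.
while :; do
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]] || break
    jq -r '"rebuild target \(.process.target): \(.process.progress.blocks) blocks, \(.process.progress.percent)%"' <<<"$info"
    sleep 1
done
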
00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.066 01:07:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.326 01:07:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:24.327 "name": "raid_bdev1", 00:25:24.327 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:24.327 "strip_size_kb": 64, 00:25:24.327 "state": "online", 00:25:24.327 "raid_level": "raid5f", 00:25:24.327 "superblock": true, 00:25:24.327 "num_base_bdevs": 4, 00:25:24.327 "num_base_bdevs_discovered": 4, 00:25:24.327 "num_base_bdevs_operational": 4, 00:25:24.327 "process": { 00:25:24.327 "type": "rebuild", 00:25:24.327 "target": "spare", 00:25:24.327 "progress": { 00:25:24.327 "blocks": 28800, 00:25:24.327 "percent": 15 00:25:24.327 } 00:25:24.327 }, 00:25:24.327 "base_bdevs_list": [ 00:25:24.327 { 00:25:24.327 "name": "spare", 00:25:24.327 "uuid": "13c4ac96-fbfd-5bf1-baa6-e9cdde92a9b7", 00:25:24.327 "is_configured": true, 00:25:24.327 "data_offset": 2048, 00:25:24.327 "data_size": 63488 00:25:24.327 }, 00:25:24.327 { 00:25:24.327 "name": "BaseBdev2", 00:25:24.327 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:24.327 "is_configured": true, 00:25:24.327 "data_offset": 2048, 00:25:24.327 "data_size": 63488 00:25:24.327 }, 00:25:24.327 { 00:25:24.327 "name": "BaseBdev3", 00:25:24.327 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:24.327 "is_configured": true, 00:25:24.327 "data_offset": 2048, 00:25:24.327 "data_size": 63488 00:25:24.327 }, 00:25:24.327 { 00:25:24.327 "name": "BaseBdev4", 00:25:24.327 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:24.327 "is_configured": true, 00:25:24.327 "data_offset": 2048, 00:25:24.327 "data_size": 63488 00:25:24.327 } 00:25:24.327 ] 00:25:24.327 }' 00:25:24.327 01:07:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:24.327 01:07:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:24.327 01:07:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:24.327 01:07:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:24.327 01:07:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:25.706 01:07:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:25.706 01:07:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:25.706 01:07:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:25.706 01:07:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:25.706 01:07:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:25.706 01:07:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:25.706 01:07:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.706 01:07:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.706 01:07:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:25.706 "name": "raid_bdev1", 00:25:25.706 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:25.706 "strip_size_kb": 64, 00:25:25.706 "state": "online", 00:25:25.706 "raid_level": "raid5f", 00:25:25.706 "superblock": true, 00:25:25.706 "num_base_bdevs": 4, 00:25:25.706 
"num_base_bdevs_discovered": 4, 00:25:25.706 "num_base_bdevs_operational": 4, 00:25:25.706 "process": { 00:25:25.706 "type": "rebuild", 00:25:25.706 "target": "spare", 00:25:25.706 "progress": { 00:25:25.706 "blocks": 53760, 00:25:25.706 "percent": 28 00:25:25.706 } 00:25:25.706 }, 00:25:25.706 "base_bdevs_list": [ 00:25:25.706 { 00:25:25.706 "name": "spare", 00:25:25.706 "uuid": "13c4ac96-fbfd-5bf1-baa6-e9cdde92a9b7", 00:25:25.706 "is_configured": true, 00:25:25.706 "data_offset": 2048, 00:25:25.706 "data_size": 63488 00:25:25.706 }, 00:25:25.706 { 00:25:25.706 "name": "BaseBdev2", 00:25:25.706 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:25.706 "is_configured": true, 00:25:25.706 "data_offset": 2048, 00:25:25.706 "data_size": 63488 00:25:25.706 }, 00:25:25.706 { 00:25:25.706 "name": "BaseBdev3", 00:25:25.706 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:25.706 "is_configured": true, 00:25:25.706 "data_offset": 2048, 00:25:25.706 "data_size": 63488 00:25:25.706 }, 00:25:25.706 { 00:25:25.706 "name": "BaseBdev4", 00:25:25.706 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:25.706 "is_configured": true, 00:25:25.706 "data_offset": 2048, 00:25:25.706 "data_size": 63488 00:25:25.706 } 00:25:25.706 ] 00:25:25.706 }' 00:25:25.706 01:07:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:25.706 01:08:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:25.706 01:08:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:25.706 01:08:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:25.706 01:08:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:27.086 01:08:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:27.086 01:08:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:27.086 01:08:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:27.086 01:08:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:27.086 01:08:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:27.086 01:08:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:27.086 01:08:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.086 01:08:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.086 01:08:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:27.086 "name": "raid_bdev1", 00:25:27.086 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:27.086 "strip_size_kb": 64, 00:25:27.086 "state": "online", 00:25:27.086 "raid_level": "raid5f", 00:25:27.086 "superblock": true, 00:25:27.086 "num_base_bdevs": 4, 00:25:27.086 "num_base_bdevs_discovered": 4, 00:25:27.086 "num_base_bdevs_operational": 4, 00:25:27.086 "process": { 00:25:27.086 "type": "rebuild", 00:25:27.086 "target": "spare", 00:25:27.086 "progress": { 00:25:27.086 "blocks": 78720, 00:25:27.086 "percent": 41 00:25:27.086 } 00:25:27.086 }, 00:25:27.086 "base_bdevs_list": [ 00:25:27.086 { 00:25:27.086 "name": "spare", 00:25:27.086 "uuid": "13c4ac96-fbfd-5bf1-baa6-e9cdde92a9b7", 00:25:27.086 "is_configured": true, 00:25:27.086 "data_offset": 2048, 00:25:27.086 "data_size": 63488 00:25:27.086 }, 00:25:27.086 { 00:25:27.086 "name": "BaseBdev2", 00:25:27.086 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:27.086 "is_configured": true, 00:25:27.086 "data_offset": 2048, 00:25:27.086 "data_size": 63488 00:25:27.086 }, 00:25:27.086 { 00:25:27.086 "name": "BaseBdev3", 00:25:27.086 
"uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:27.086 "is_configured": true, 00:25:27.086 "data_offset": 2048, 00:25:27.086 "data_size": 63488 00:25:27.086 }, 00:25:27.086 { 00:25:27.086 "name": "BaseBdev4", 00:25:27.086 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:27.086 "is_configured": true, 00:25:27.086 "data_offset": 2048, 00:25:27.086 "data_size": 63488 00:25:27.086 } 00:25:27.086 ] 00:25:27.086 }' 00:25:27.086 01:08:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:27.086 01:08:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:27.086 01:08:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:27.086 01:08:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:27.086 01:08:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:28.023 01:08:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:28.023 01:08:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:28.023 01:08:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:28.023 01:08:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:28.023 01:08:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:28.023 01:08:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:28.023 01:08:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.023 01:08:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.282 01:08:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:28.282 "name": "raid_bdev1", 00:25:28.282 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:28.282 "strip_size_kb": 64, 00:25:28.282 "state": "online", 00:25:28.282 "raid_level": "raid5f", 00:25:28.282 "superblock": true, 00:25:28.282 "num_base_bdevs": 4, 00:25:28.282 "num_base_bdevs_discovered": 4, 00:25:28.282 "num_base_bdevs_operational": 4, 00:25:28.282 "process": { 00:25:28.282 "type": "rebuild", 00:25:28.282 "target": "spare", 00:25:28.282 "progress": { 00:25:28.282 "blocks": 105600, 00:25:28.282 "percent": 55 00:25:28.282 } 00:25:28.282 }, 00:25:28.282 "base_bdevs_list": [ 00:25:28.282 { 00:25:28.282 "name": "spare", 00:25:28.282 "uuid": "13c4ac96-fbfd-5bf1-baa6-e9cdde92a9b7", 00:25:28.282 "is_configured": true, 00:25:28.282 "data_offset": 2048, 00:25:28.282 "data_size": 63488 00:25:28.282 }, 00:25:28.282 { 00:25:28.282 "name": "BaseBdev2", 00:25:28.282 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:28.282 "is_configured": true, 00:25:28.282 "data_offset": 2048, 00:25:28.282 "data_size": 63488 00:25:28.282 }, 00:25:28.282 { 00:25:28.283 "name": "BaseBdev3", 00:25:28.283 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:28.283 "is_configured": true, 00:25:28.283 "data_offset": 2048, 00:25:28.283 "data_size": 63488 00:25:28.283 }, 00:25:28.283 { 00:25:28.283 "name": "BaseBdev4", 00:25:28.283 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:28.283 "is_configured": true, 00:25:28.283 "data_offset": 2048, 00:25:28.283 "data_size": 63488 00:25:28.283 } 00:25:28.283 ] 00:25:28.283 }' 00:25:28.283 01:08:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:28.283 01:08:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:28.542 01:08:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:28.542 01:08:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:28.542 01:08:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:29.480 
01:08:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:29.480 01:08:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:29.480 01:08:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:29.480 01:08:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:29.480 01:08:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:29.480 01:08:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:29.480 01:08:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.480 01:08:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.739 01:08:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:29.739 "name": "raid_bdev1", 00:25:29.739 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:29.739 "strip_size_kb": 64, 00:25:29.739 "state": "online", 00:25:29.739 "raid_level": "raid5f", 00:25:29.739 "superblock": true, 00:25:29.739 "num_base_bdevs": 4, 00:25:29.739 "num_base_bdevs_discovered": 4, 00:25:29.739 "num_base_bdevs_operational": 4, 00:25:29.739 "process": { 00:25:29.739 "type": "rebuild", 00:25:29.739 "target": "spare", 00:25:29.739 "progress": { 00:25:29.739 "blocks": 130560, 00:25:29.739 "percent": 68 00:25:29.739 } 00:25:29.739 }, 00:25:29.739 "base_bdevs_list": [ 00:25:29.739 { 00:25:29.739 "name": "spare", 00:25:29.739 "uuid": "13c4ac96-fbfd-5bf1-baa6-e9cdde92a9b7", 00:25:29.739 "is_configured": true, 00:25:29.739 "data_offset": 2048, 00:25:29.739 "data_size": 63488 00:25:29.739 }, 00:25:29.739 { 00:25:29.739 "name": "BaseBdev2", 00:25:29.739 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:29.739 "is_configured": true, 00:25:29.739 "data_offset": 2048, 00:25:29.739 "data_size": 63488 00:25:29.739 }, 00:25:29.739 { 00:25:29.739 "name": "BaseBdev3", 00:25:29.739 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:29.739 "is_configured": true, 00:25:29.739 "data_offset": 2048, 00:25:29.739 "data_size": 63488 00:25:29.739 }, 00:25:29.739 { 00:25:29.739 "name": "BaseBdev4", 00:25:29.739 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:29.739 "is_configured": true, 00:25:29.739 "data_offset": 2048, 00:25:29.739 "data_size": 63488 00:25:29.739 } 00:25:29.739 ] 00:25:29.739 }' 00:25:29.739 01:08:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:29.739 01:08:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:29.739 01:08:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:29.739 01:08:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:29.739 01:08:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:31.118 01:08:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:31.118 01:08:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:31.118 01:08:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:31.118 01:08:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:31.118 01:08:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:31.118 01:08:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:31.118 01:08:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.118 01:08:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.118 01:08:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:31.118 "name": "raid_bdev1", 
00:25:31.118 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:31.118 "strip_size_kb": 64, 00:25:31.118 "state": "online", 00:25:31.118 "raid_level": "raid5f", 00:25:31.118 "superblock": true, 00:25:31.118 "num_base_bdevs": 4, 00:25:31.118 "num_base_bdevs_discovered": 4, 00:25:31.118 "num_base_bdevs_operational": 4, 00:25:31.118 "process": { 00:25:31.118 "type": "rebuild", 00:25:31.118 "target": "spare", 00:25:31.118 "progress": { 00:25:31.118 "blocks": 155520, 00:25:31.118 "percent": 81 00:25:31.118 } 00:25:31.118 }, 00:25:31.118 "base_bdevs_list": [ 00:25:31.118 { 00:25:31.118 "name": "spare", 00:25:31.118 "uuid": "13c4ac96-fbfd-5bf1-baa6-e9cdde92a9b7", 00:25:31.118 "is_configured": true, 00:25:31.118 "data_offset": 2048, 00:25:31.118 "data_size": 63488 00:25:31.118 }, 00:25:31.118 { 00:25:31.118 "name": "BaseBdev2", 00:25:31.118 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:31.118 "is_configured": true, 00:25:31.118 "data_offset": 2048, 00:25:31.118 "data_size": 63488 00:25:31.118 }, 00:25:31.118 { 00:25:31.118 "name": "BaseBdev3", 00:25:31.118 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:31.118 "is_configured": true, 00:25:31.118 "data_offset": 2048, 00:25:31.118 "data_size": 63488 00:25:31.118 }, 00:25:31.118 { 00:25:31.118 "name": "BaseBdev4", 00:25:31.118 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:31.118 "is_configured": true, 00:25:31.118 "data_offset": 2048, 00:25:31.118 "data_size": 63488 00:25:31.118 } 00:25:31.118 ] 00:25:31.118 }' 00:25:31.118 01:08:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:31.118 01:08:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:31.118 01:08:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:31.118 01:08:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:31.118 01:08:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:32.054 01:08:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:32.054 01:08:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:32.054 01:08:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:32.054 01:08:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:32.054 01:08:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:32.054 01:08:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:32.054 01:08:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.054 01:08:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.313 01:08:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:32.313 "name": "raid_bdev1", 00:25:32.313 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:32.313 "strip_size_kb": 64, 00:25:32.313 "state": "online", 00:25:32.313 "raid_level": "raid5f", 00:25:32.313 "superblock": true, 00:25:32.313 "num_base_bdevs": 4, 00:25:32.313 "num_base_bdevs_discovered": 4, 00:25:32.313 "num_base_bdevs_operational": 4, 00:25:32.313 "process": { 00:25:32.313 "type": "rebuild", 00:25:32.313 "target": "spare", 00:25:32.313 "progress": { 00:25:32.313 "blocks": 180480, 00:25:32.313 "percent": 94 00:25:32.313 } 00:25:32.313 }, 00:25:32.313 "base_bdevs_list": [ 00:25:32.313 { 00:25:32.313 "name": "spare", 00:25:32.313 "uuid": "13c4ac96-fbfd-5bf1-baa6-e9cdde92a9b7", 00:25:32.313 "is_configured": true, 00:25:32.313 "data_offset": 2048, 00:25:32.313 "data_size": 63488 00:25:32.313 }, 00:25:32.313 { 00:25:32.313 "name": 
"BaseBdev2", 00:25:32.313 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:32.313 "is_configured": true, 00:25:32.313 "data_offset": 2048, 00:25:32.313 "data_size": 63488 00:25:32.313 }, 00:25:32.313 { 00:25:32.313 "name": "BaseBdev3", 00:25:32.313 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:32.313 "is_configured": true, 00:25:32.313 "data_offset": 2048, 00:25:32.313 "data_size": 63488 00:25:32.313 }, 00:25:32.313 { 00:25:32.313 "name": "BaseBdev4", 00:25:32.313 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:32.313 "is_configured": true, 00:25:32.313 "data_offset": 2048, 00:25:32.313 "data_size": 63488 00:25:32.313 } 00:25:32.313 ] 00:25:32.313 }' 00:25:32.313 01:08:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:32.313 01:08:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:32.313 01:08:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:32.572 01:08:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:32.572 01:08:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:32.832 [2024-11-18 01:08:07.162467] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:32.832 [2024-11-18 01:08:07.162582] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:32.832 [2024-11-18 01:08:07.162746] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.400 01:08:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:33.400 01:08:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:33.400 01:08:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:33.400 01:08:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:33.400 01:08:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:33.400 01:08:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:33.400 01:08:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.400 01:08:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.660 01:08:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:33.660 "name": "raid_bdev1", 00:25:33.660 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:33.660 "strip_size_kb": 64, 00:25:33.660 "state": "online", 00:25:33.660 "raid_level": "raid5f", 00:25:33.660 "superblock": true, 00:25:33.660 "num_base_bdevs": 4, 00:25:33.660 "num_base_bdevs_discovered": 4, 00:25:33.660 "num_base_bdevs_operational": 4, 00:25:33.660 "base_bdevs_list": [ 00:25:33.660 { 00:25:33.660 "name": "spare", 00:25:33.660 "uuid": "13c4ac96-fbfd-5bf1-baa6-e9cdde92a9b7", 00:25:33.660 "is_configured": true, 00:25:33.660 "data_offset": 2048, 00:25:33.660 "data_size": 63488 00:25:33.660 }, 00:25:33.660 { 00:25:33.660 "name": "BaseBdev2", 00:25:33.660 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:33.660 "is_configured": true, 00:25:33.660 "data_offset": 2048, 00:25:33.660 "data_size": 63488 00:25:33.660 }, 00:25:33.660 { 00:25:33.660 "name": "BaseBdev3", 00:25:33.660 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:33.660 "is_configured": true, 00:25:33.660 "data_offset": 2048, 00:25:33.660 "data_size": 63488 00:25:33.660 }, 00:25:33.660 { 00:25:33.660 "name": "BaseBdev4", 00:25:33.660 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:33.660 "is_configured": true, 00:25:33.660 "data_offset": 2048, 00:25:33.660 "data_size": 63488 00:25:33.660 } 
00:25:33.660 ] 00:25:33.660 }' 00:25:33.660 01:08:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:33.660 01:08:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:33.660 01:08:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:33.660 01:08:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:33.660 01:08:08 -- bdev/bdev_raid.sh@660 -- # break 00:25:33.660 01:08:08 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:33.660 01:08:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:33.660 01:08:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:33.660 01:08:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:33.660 01:08:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:33.660 01:08:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.660 01:08:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.919 01:08:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:33.919 "name": "raid_bdev1", 00:25:33.919 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:33.919 "strip_size_kb": 64, 00:25:33.919 "state": "online", 00:25:33.919 "raid_level": "raid5f", 00:25:33.919 "superblock": true, 00:25:33.919 "num_base_bdevs": 4, 00:25:33.919 "num_base_bdevs_discovered": 4, 00:25:33.919 "num_base_bdevs_operational": 4, 00:25:33.919 "base_bdevs_list": [ 00:25:33.919 { 00:25:33.919 "name": "spare", 00:25:33.919 "uuid": "13c4ac96-fbfd-5bf1-baa6-e9cdde92a9b7", 00:25:33.919 "is_configured": true, 00:25:33.919 "data_offset": 2048, 00:25:33.919 "data_size": 63488 00:25:33.919 }, 00:25:33.919 { 00:25:33.919 "name": "BaseBdev2", 00:25:33.919 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:33.919 "is_configured": true, 00:25:33.919 "data_offset": 2048, 00:25:33.919 "data_size": 63488 00:25:33.919 }, 00:25:33.919 { 00:25:33.919 "name": "BaseBdev3", 00:25:33.919 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:33.919 "is_configured": true, 00:25:33.919 "data_offset": 2048, 00:25:33.919 "data_size": 63488 00:25:33.919 }, 00:25:33.919 { 00:25:33.919 "name": "BaseBdev4", 00:25:33.919 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:33.919 "is_configured": true, 00:25:33.919 "data_offset": 2048, 00:25:33.919 "data_size": 63488 00:25:33.919 } 00:25:33.919 ] 00:25:33.919 }' 00:25:33.919 01:08:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:34.178 01:08:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:34.178 01:08:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:34.178 01:08:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:34.178 01:08:08 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:34.178 01:08:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:34.178 01:08:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:34.178 01:08:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:34.178 01:08:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:34.178 01:08:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:34.178 01:08:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:34.178 01:08:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:34.178 01:08:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:34.179 01:08:08 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:25:34.179 01:08:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.179 01:08:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.179 01:08:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:34.179 "name": "raid_bdev1", 00:25:34.179 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:34.179 "strip_size_kb": 64, 00:25:34.179 "state": "online", 00:25:34.179 "raid_level": "raid5f", 00:25:34.179 "superblock": true, 00:25:34.179 "num_base_bdevs": 4, 00:25:34.179 "num_base_bdevs_discovered": 4, 00:25:34.179 "num_base_bdevs_operational": 4, 00:25:34.179 "base_bdevs_list": [ 00:25:34.179 { 00:25:34.179 "name": "spare", 00:25:34.179 "uuid": "13c4ac96-fbfd-5bf1-baa6-e9cdde92a9b7", 00:25:34.179 "is_configured": true, 00:25:34.179 "data_offset": 2048, 00:25:34.179 "data_size": 63488 00:25:34.179 }, 00:25:34.179 { 00:25:34.179 "name": "BaseBdev2", 00:25:34.179 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:34.179 "is_configured": true, 00:25:34.179 "data_offset": 2048, 00:25:34.179 "data_size": 63488 00:25:34.179 }, 00:25:34.179 { 00:25:34.179 "name": "BaseBdev3", 00:25:34.179 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:34.179 "is_configured": true, 00:25:34.179 "data_offset": 2048, 00:25:34.179 "data_size": 63488 00:25:34.179 }, 00:25:34.179 { 00:25:34.179 "name": "BaseBdev4", 00:25:34.179 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:34.179 "is_configured": true, 00:25:34.179 "data_offset": 2048, 00:25:34.179 "data_size": 63488 00:25:34.179 } 00:25:34.179 ] 00:25:34.179 }' 00:25:34.179 01:08:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:34.179 01:08:08 -- common/autotest_common.sh@10 -- # set +x 00:25:35.116 01:08:09 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:35.116 [2024-11-18 01:08:09.426170] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:35.116 [2024-11-18 01:08:09.426210] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:35.116 [2024-11-18 01:08:09.426346] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:35.116 [2024-11-18 01:08:09.426463] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:35.116 [2024-11-18 01:08:09.426473] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:25:35.116 01:08:09 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:35.116 01:08:09 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.375 01:08:09 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:35.375 01:08:09 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:35.375 01:08:09 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:35.375 01:08:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:35.375 01:08:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:35.375 01:08:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:35.375 01:08:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:35.375 01:08:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:35.375 01:08:09 -- 
bdev/nbd_common.sh@12 -- # local i 00:25:35.375 01:08:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:35.375 01:08:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:35.375 01:08:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:35.634 /dev/nbd0 00:25:35.634 01:08:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:35.634 01:08:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:35.634 01:08:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:35.634 01:08:09 -- common/autotest_common.sh@867 -- # local i 00:25:35.634 01:08:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:35.634 01:08:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:35.634 01:08:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:35.634 01:08:09 -- common/autotest_common.sh@871 -- # break 00:25:35.634 01:08:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:35.634 01:08:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:35.634 01:08:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:35.634 1+0 records in 00:25:35.634 1+0 records out 00:25:35.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306192 s, 13.4 MB/s 00:25:35.634 01:08:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:35.634 01:08:09 -- common/autotest_common.sh@884 -- # size=4096 00:25:35.634 01:08:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:35.634 01:08:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:35.634 01:08:09 -- common/autotest_common.sh@887 -- # return 0 00:25:35.634 01:08:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:35.634 01:08:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:35.634 01:08:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:35.893 /dev/nbd1 00:25:35.893 01:08:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:35.893 01:08:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:35.893 01:08:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:35.893 01:08:10 -- common/autotest_common.sh@867 -- # local i 00:25:35.893 01:08:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:35.893 01:08:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:35.893 01:08:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:35.893 01:08:10 -- common/autotest_common.sh@871 -- # break 00:25:35.893 01:08:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:35.893 01:08:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:35.893 01:08:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:35.893 1+0 records in 00:25:35.893 1+0 records out 00:25:35.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284724 s, 14.4 MB/s 00:25:35.893 01:08:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:35.893 01:08:10 -- common/autotest_common.sh@884 -- # size=4096 00:25:35.893 01:08:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:35.893 01:08:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:35.893 01:08:10 -- 
common/autotest_common.sh@887 -- # return 0 00:25:35.893 01:08:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:35.893 01:08:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:35.894 01:08:10 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:35.894 01:08:10 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:35.894 01:08:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:35.894 01:08:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:35.894 01:08:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:35.894 01:08:10 -- bdev/nbd_common.sh@51 -- # local i 00:25:35.894 01:08:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:35.894 01:08:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:36.153 01:08:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:36.153 01:08:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:36.153 01:08:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:36.153 01:08:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:36.153 01:08:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:36.153 01:08:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:36.153 01:08:10 -- bdev/nbd_common.sh@41 -- # break 00:25:36.153 01:08:10 -- bdev/nbd_common.sh@45 -- # return 0 00:25:36.153 01:08:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:36.153 01:08:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:36.412 01:08:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:36.412 01:08:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:36.412 01:08:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:36.412 01:08:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:36.412 01:08:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:36.412 01:08:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:36.412 01:08:10 -- bdev/nbd_common.sh@41 -- # break 00:25:36.412 01:08:10 -- bdev/nbd_common.sh@45 -- # return 0 00:25:36.412 01:08:10 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:25:36.412 01:08:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:36.412 01:08:10 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:25:36.412 01:08:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:36.671 01:08:10 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:36.931 [2024-11-18 01:08:11.159065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:36.931 [2024-11-18 01:08:11.159153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.931 [2024-11-18 01:08:11.159199] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:36.931 [2024-11-18 01:08:11.159223] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.931 [2024-11-18 01:08:11.162053] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.931 [2024-11-18 01:08:11.162117] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:36.931 [2024-11-18 
01:08:11.162242] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:36.931 [2024-11-18 01:08:11.162312] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:36.931 BaseBdev1 00:25:36.931 01:08:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:36.931 01:08:11 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:25:36.931 01:08:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:25:37.191 01:08:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:37.191 [2024-11-18 01:08:11.583155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:37.191 [2024-11-18 01:08:11.583234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.191 [2024-11-18 01:08:11.583277] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:37.191 [2024-11-18 01:08:11.583301] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.191 [2024-11-18 01:08:11.583736] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.191 [2024-11-18 01:08:11.583783] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:37.191 [2024-11-18 01:08:11.583864] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:25:37.191 [2024-11-18 01:08:11.583875] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:25:37.191 [2024-11-18 01:08:11.583883] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:37.191 [2024-11-18 01:08:11.583921] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state configuring 00:25:37.191 [2024-11-18 01:08:11.583969] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:37.191 BaseBdev2 00:25:37.450 01:08:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:37.450 01:08:11 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:25:37.450 01:08:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:25:37.450 01:08:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:37.709 [2024-11-18 01:08:11.999246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:37.709 [2024-11-18 01:08:11.999365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.709 [2024-11-18 01:08:11.999401] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:37.709 [2024-11-18 01:08:11.999428] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.709 [2024-11-18 01:08:11.999877] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.709 [2024-11-18 01:08:11.999928] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:37.709 [2024-11-18 01:08:12.000010] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev BaseBdev3 00:25:37.709 [2024-11-18 01:08:12.000031] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:37.709 BaseBdev3 00:25:37.709 01:08:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:37.709 01:08:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:25:37.709 01:08:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:25:37.968 01:08:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:38.249 [2024-11-18 01:08:12.487415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:38.249 [2024-11-18 01:08:12.487538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:38.249 [2024-11-18 01:08:12.487576] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:38.249 [2024-11-18 01:08:12.487621] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:38.249 [2024-11-18 01:08:12.488075] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:38.249 [2024-11-18 01:08:12.488123] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:38.249 [2024-11-18 01:08:12.488209] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:25:38.249 [2024-11-18 01:08:12.488232] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:38.249 BaseBdev4 00:25:38.249 01:08:12 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:38.557 01:08:12 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:38.557 [2024-11-18 01:08:12.891405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:38.557 [2024-11-18 01:08:12.891515] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:38.557 [2024-11-18 01:08:12.891550] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:25:38.557 [2024-11-18 01:08:12.891582] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:38.557 [2024-11-18 01:08:12.892074] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:38.557 [2024-11-18 01:08:12.892124] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:38.557 [2024-11-18 01:08:12.892221] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:25:38.557 [2024-11-18 01:08:12.892269] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:38.557 spare 00:25:38.557 01:08:12 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:38.557 01:08:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:38.557 01:08:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:38.557 01:08:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:38.557 01:08:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:38.557 01:08:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:38.557 01:08:12 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:38.557 01:08:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:38.557 01:08:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:38.557 01:08:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:38.557 01:08:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.557 01:08:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.829 [2024-11-18 01:08:12.992390] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:25:38.829 [2024-11-18 01:08:12.992419] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:38.829 [2024-11-18 01:08:12.992609] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045ea0 00:25:38.829 [2024-11-18 01:08:12.993508] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:25:38.829 [2024-11-18 01:08:12.993530] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:25:38.829 [2024-11-18 01:08:12.993697] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:38.829 01:08:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:38.829 "name": "raid_bdev1", 00:25:38.829 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:38.829 "strip_size_kb": 64, 00:25:38.829 "state": "online", 00:25:38.829 "raid_level": "raid5f", 00:25:38.829 "superblock": true, 00:25:38.829 "num_base_bdevs": 4, 00:25:38.829 "num_base_bdevs_discovered": 4, 00:25:38.829 "num_base_bdevs_operational": 4, 00:25:38.829 "base_bdevs_list": [ 00:25:38.829 { 00:25:38.829 "name": "spare", 00:25:38.829 "uuid": "13c4ac96-fbfd-5bf1-baa6-e9cdde92a9b7", 00:25:38.829 "is_configured": true, 00:25:38.829 "data_offset": 2048, 00:25:38.829 "data_size": 63488 00:25:38.829 }, 00:25:38.829 { 00:25:38.829 "name": "BaseBdev2", 00:25:38.829 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:38.829 "is_configured": true, 00:25:38.829 "data_offset": 2048, 00:25:38.829 "data_size": 63488 00:25:38.829 }, 00:25:38.829 { 00:25:38.829 "name": "BaseBdev3", 00:25:38.829 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:38.829 "is_configured": true, 00:25:38.829 "data_offset": 2048, 00:25:38.829 "data_size": 63488 00:25:38.829 }, 00:25:38.829 { 00:25:38.829 "name": "BaseBdev4", 00:25:38.829 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:38.829 "is_configured": true, 00:25:38.829 "data_offset": 2048, 00:25:38.829 "data_size": 63488 00:25:38.829 } 00:25:38.829 ] 00:25:38.829 }' 00:25:38.829 01:08:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:38.829 01:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:39.396 01:08:13 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:39.396 01:08:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:39.396 01:08:13 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:39.396 01:08:13 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:39.396 01:08:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:39.396 01:08:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.396 01:08:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.656 01:08:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:39.656 "name": 
"raid_bdev1", 00:25:39.656 "uuid": "bf9a53d1-2be8-4298-afa1-f587a3d31e8f", 00:25:39.656 "strip_size_kb": 64, 00:25:39.656 "state": "online", 00:25:39.656 "raid_level": "raid5f", 00:25:39.656 "superblock": true, 00:25:39.656 "num_base_bdevs": 4, 00:25:39.656 "num_base_bdevs_discovered": 4, 00:25:39.656 "num_base_bdevs_operational": 4, 00:25:39.656 "base_bdevs_list": [ 00:25:39.656 { 00:25:39.656 "name": "spare", 00:25:39.656 "uuid": "13c4ac96-fbfd-5bf1-baa6-e9cdde92a9b7", 00:25:39.656 "is_configured": true, 00:25:39.656 "data_offset": 2048, 00:25:39.656 "data_size": 63488 00:25:39.656 }, 00:25:39.656 { 00:25:39.656 "name": "BaseBdev2", 00:25:39.656 "uuid": "54fdaaf9-a45f-541e-8743-2d550ad2353c", 00:25:39.656 "is_configured": true, 00:25:39.656 "data_offset": 2048, 00:25:39.656 "data_size": 63488 00:25:39.656 }, 00:25:39.656 { 00:25:39.656 "name": "BaseBdev3", 00:25:39.656 "uuid": "399d4bd7-f319-5bff-b3c4-c185e5547da5", 00:25:39.656 "is_configured": true, 00:25:39.656 "data_offset": 2048, 00:25:39.656 "data_size": 63488 00:25:39.656 }, 00:25:39.656 { 00:25:39.656 "name": "BaseBdev4", 00:25:39.656 "uuid": "ee54cdfd-d43b-5bde-8b38-e71acf9427f0", 00:25:39.656 "is_configured": true, 00:25:39.656 "data_offset": 2048, 00:25:39.656 "data_size": 63488 00:25:39.656 } 00:25:39.656 ] 00:25:39.656 }' 00:25:39.656 01:08:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:39.656 01:08:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:39.656 01:08:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:39.656 01:08:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:39.656 01:08:13 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.656 01:08:13 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:39.915 01:08:14 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:25:39.915 01:08:14 -- bdev/bdev_raid.sh@709 -- # killprocess 142020 00:25:39.915 01:08:14 -- common/autotest_common.sh@936 -- # '[' -z 142020 ']' 00:25:39.915 01:08:14 -- common/autotest_common.sh@940 -- # kill -0 142020 00:25:39.915 01:08:14 -- common/autotest_common.sh@941 -- # uname 00:25:39.915 01:08:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:39.915 01:08:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142020 00:25:39.915 01:08:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:39.915 01:08:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:39.915 killing process with pid 142020 00:25:39.915 01:08:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142020' 00:25:39.915 01:08:14 -- common/autotest_common.sh@955 -- # kill 142020 00:25:39.915 Received shutdown signal, test time was about 60.000000 seconds 00:25:39.915 00:25:39.915 Latency(us) 00:25:39.915 [2024-11-18T01:08:14.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.915 [2024-11-18T01:08:14.314Z] =================================================================================================================== 00:25:39.915 [2024-11-18T01:08:14.314Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:39.915 [2024-11-18 01:08:14.235581] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:39.915 [2024-11-18 01:08:14.235679] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:39.915 [2024-11-18 01:08:14.235772] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:39.915 [2024-11-18 01:08:14.235781] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:25:39.915 01:08:14 -- common/autotest_common.sh@960 -- # wait 142020 00:25:40.174 [2024-11-18 01:08:14.323481] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:40.434 01:08:14 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:40.434 00:25:40.434 real 0m27.480s 00:25:40.434 user 0m41.004s 00:25:40.434 sys 0m4.125s 00:25:40.434 01:08:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:40.434 01:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:40.434 ************************************ 00:25:40.434 END TEST raid5f_rebuild_test_sb 00:25:40.434 ************************************ 00:25:40.434 01:08:14 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:25:40.434 00:25:40.434 real 11m7.440s 00:25:40.434 user 18m9.638s 00:25:40.434 sys 2m4.724s 00:25:40.434 01:08:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:40.434 01:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:40.434 ************************************ 00:25:40.434 END TEST bdev_raid 00:25:40.434 ************************************ 00:25:40.694 01:08:14 -- spdk/autotest.sh@184 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:25:40.694 01:08:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:40.694 01:08:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:40.694 01:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:40.694 ************************************ 00:25:40.694 START TEST bdevperf_config 00:25:40.694 ************************************ 00:25:40.694 01:08:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:25:40.694 * Looking for test storage... 00:25:40.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:25:40.694 01:08:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:40.694 01:08:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:40.694 01:08:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:40.694 01:08:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:40.694 01:08:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:40.694 01:08:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:40.694 01:08:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:40.694 01:08:15 -- scripts/common.sh@335 -- # IFS=.-: 00:25:40.694 01:08:15 -- scripts/common.sh@335 -- # read -ra ver1 00:25:40.694 01:08:15 -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.694 01:08:15 -- scripts/common.sh@336 -- # read -ra ver2 00:25:40.695 01:08:15 -- scripts/common.sh@337 -- # local 'op=<' 00:25:40.695 01:08:15 -- scripts/common.sh@339 -- # ver1_l=2 00:25:40.695 01:08:15 -- scripts/common.sh@340 -- # ver2_l=1 00:25:40.695 01:08:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:40.695 01:08:15 -- scripts/common.sh@343 -- # case "$op" in 00:25:40.695 01:08:15 -- scripts/common.sh@344 -- # : 1 00:25:40.695 01:08:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:40.695 01:08:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:40.695 01:08:15 -- scripts/common.sh@364 -- # decimal 1 00:25:40.695 01:08:15 -- scripts/common.sh@352 -- # local d=1 00:25:40.695 01:08:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.695 01:08:15 -- scripts/common.sh@354 -- # echo 1 00:25:40.695 01:08:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:40.695 01:08:15 -- scripts/common.sh@365 -- # decimal 2 00:25:40.695 01:08:15 -- scripts/common.sh@352 -- # local d=2 00:25:40.695 01:08:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.695 01:08:15 -- scripts/common.sh@354 -- # echo 2 00:25:40.695 01:08:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:40.695 01:08:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:40.695 01:08:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:40.695 01:08:15 -- scripts/common.sh@367 -- # return 0 00:25:40.695 01:08:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.695 01:08:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:40.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.695 --rc genhtml_branch_coverage=1 00:25:40.695 --rc genhtml_function_coverage=1 00:25:40.695 --rc genhtml_legend=1 00:25:40.695 --rc geninfo_all_blocks=1 00:25:40.695 --rc geninfo_unexecuted_blocks=1 00:25:40.695 00:25:40.695 ' 00:25:40.695 01:08:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:40.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.695 --rc genhtml_branch_coverage=1 00:25:40.695 --rc genhtml_function_coverage=1 00:25:40.695 --rc genhtml_legend=1 00:25:40.695 --rc geninfo_all_blocks=1 00:25:40.695 --rc geninfo_unexecuted_blocks=1 00:25:40.695 00:25:40.695 ' 00:25:40.695 01:08:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:40.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.695 --rc genhtml_branch_coverage=1 00:25:40.695 --rc genhtml_function_coverage=1 00:25:40.695 --rc genhtml_legend=1 00:25:40.695 --rc geninfo_all_blocks=1 00:25:40.695 --rc geninfo_unexecuted_blocks=1 00:25:40.695 00:25:40.695 ' 00:25:40.695 01:08:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:40.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.695 --rc genhtml_branch_coverage=1 00:25:40.695 --rc genhtml_function_coverage=1 00:25:40.695 --rc genhtml_legend=1 00:25:40.695 --rc geninfo_all_blocks=1 00:25:40.695 --rc geninfo_unexecuted_blocks=1 00:25:40.695 00:25:40.695 ' 00:25:40.695 01:08:15 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:25:40.695 01:08:15 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:25:40.695 01:08:15 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:25:40.695 01:08:15 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:40.695 01:08:15 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:40.695 01:08:15 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:25:40.695 01:08:15 -- bdevperf/common.sh@8 -- # local job_section=global 00:25:40.695 01:08:15 -- bdevperf/common.sh@9 -- # local rw=read 00:25:40.695 01:08:15 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:25:40.695 01:08:15 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:25:40.695 01:08:15 -- bdevperf/common.sh@13 
-- # cat 00:25:40.695 01:08:15 -- bdevperf/common.sh@18 -- # job='[global]' 00:25:40.695 00:25:40.695 01:08:15 -- bdevperf/common.sh@19 -- # echo 00:25:40.695 01:08:15 -- bdevperf/common.sh@20 -- # cat 00:25:40.695 01:08:15 -- bdevperf/test_config.sh@18 -- # create_job job0 00:25:40.695 01:08:15 -- bdevperf/common.sh@8 -- # local job_section=job0 00:25:40.695 01:08:15 -- bdevperf/common.sh@9 -- # local rw= 00:25:40.695 01:08:15 -- bdevperf/common.sh@10 -- # local filename= 00:25:40.695 01:08:15 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:25:40.695 01:08:15 -- bdevperf/common.sh@18 -- # job='[job0]' 00:25:40.695 00:25:40.695 01:08:15 -- bdevperf/common.sh@19 -- # echo 00:25:40.695 01:08:15 -- bdevperf/common.sh@20 -- # cat 00:25:40.695 01:08:15 -- bdevperf/test_config.sh@19 -- # create_job job1 00:25:40.695 01:08:15 -- bdevperf/common.sh@8 -- # local job_section=job1 00:25:40.695 01:08:15 -- bdevperf/common.sh@9 -- # local rw= 00:25:40.695 01:08:15 -- bdevperf/common.sh@10 -- # local filename= 00:25:40.695 01:08:15 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:25:40.695 01:08:15 -- bdevperf/common.sh@18 -- # job='[job1]' 00:25:40.695 00:25:40.695 01:08:15 -- bdevperf/common.sh@19 -- # echo 00:25:40.695 01:08:15 -- bdevperf/common.sh@20 -- # cat 00:25:40.695 01:08:15 -- bdevperf/test_config.sh@20 -- # create_job job2 00:25:40.695 01:08:15 -- bdevperf/common.sh@8 -- # local job_section=job2 00:25:40.695 01:08:15 -- bdevperf/common.sh@9 -- # local rw= 00:25:40.695 01:08:15 -- bdevperf/common.sh@10 -- # local filename= 00:25:40.695 01:08:15 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:25:40.695 01:08:15 -- bdevperf/common.sh@18 -- # job='[job2]' 00:25:40.695 00:25:40.695 01:08:15 -- bdevperf/common.sh@19 -- # echo 00:25:40.695 01:08:15 -- bdevperf/common.sh@20 -- # cat 00:25:40.695 01:08:15 -- bdevperf/test_config.sh@21 -- # create_job job3 00:25:40.695 01:08:15 -- bdevperf/common.sh@8 -- # local job_section=job3 00:25:40.695 01:08:15 -- bdevperf/common.sh@9 -- # local rw= 00:25:40.695 01:08:15 -- bdevperf/common.sh@10 -- # local filename= 00:25:40.695 01:08:15 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:25:40.695 01:08:15 -- bdevperf/common.sh@18 -- # job='[job3]' 00:25:40.695 00:25:40.695 01:08:15 -- bdevperf/common.sh@19 -- # echo 00:25:40.695 01:08:15 -- bdevperf/common.sh@20 -- # cat 00:25:40.695 01:08:15 -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:43.987 01:08:18 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-11-18 01:08:15.159681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:43.987 [2024-11-18 01:08:15.159952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142773 ] 00:25:43.987 Using job config with 4 jobs 00:25:43.987 [2024-11-18 01:08:15.315500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.987 [2024-11-18 01:08:15.402789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.987 cpumask for '\''job0'\'' is too big 00:25:43.987 cpumask for '\''job1'\'' is too big 00:25:43.987 cpumask for '\''job2'\'' is too big 00:25:43.987 cpumask for '\''job3'\'' is too big 00:25:43.987 Running I/O for 2 seconds... 00:25:43.987 00:25:43.987 Latency(us) 00:25:43.987 [2024-11-18T01:08:18.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.987 [2024-11-18T01:08:18.386Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:43.987 Malloc0 : 2.01 35622.45 34.79 0.00 0.00 7180.14 1513.57 11858.90 00:25:43.987 [2024-11-18T01:08:18.386Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:43.987 Malloc0 : 2.01 35597.56 34.76 0.00 0.00 7173.38 1396.54 10423.34 00:25:43.987 [2024-11-18T01:08:18.386Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:43.987 Malloc0 : 2.01 35574.17 34.74 0.00 0.00 7165.62 1341.93 9050.21 00:25:43.987 [2024-11-18T01:08:18.386Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:43.987 Malloc0 : 2.02 35641.70 34.81 0.00 0.00 7141.07 690.47 7957.94 00:25:43.987 [2024-11-18T01:08:18.386Z] =================================================================================================================== 00:25:43.987 [2024-11-18T01:08:18.387Z] Total : 142435.89 139.10 0.00 0.00 7165.03 690.47 11858.90' 00:25:43.988 01:08:18 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-11-18 01:08:15.159681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:43.988 [2024-11-18 01:08:15.159952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142773 ] 00:25:43.988 Using job config with 4 jobs 00:25:43.988 [2024-11-18 01:08:15.315500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.988 [2024-11-18 01:08:15.402789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.988 cpumask for '\''job0'\'' is too big 00:25:43.988 cpumask for '\''job1'\'' is too big 00:25:43.988 cpumask for '\''job2'\'' is too big 00:25:43.988 cpumask for '\''job3'\'' is too big 00:25:43.988 Running I/O for 2 seconds... 
00:25:43.988 00:25:43.988 Latency(us) 00:25:43.988 [2024-11-18T01:08:18.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.988 [2024-11-18T01:08:18.387Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:43.988 Malloc0 : 2.01 35622.45 34.79 0.00 0.00 7180.14 1513.57 11858.90 00:25:43.988 [2024-11-18T01:08:18.387Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:43.988 Malloc0 : 2.01 35597.56 34.76 0.00 0.00 7173.38 1396.54 10423.34 00:25:43.988 [2024-11-18T01:08:18.387Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:43.988 Malloc0 : 2.01 35574.17 34.74 0.00 0.00 7165.62 1341.93 9050.21 00:25:43.988 [2024-11-18T01:08:18.387Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:43.988 Malloc0 : 2.02 35641.70 34.81 0.00 0.00 7141.07 690.47 7957.94 00:25:43.988 [2024-11-18T01:08:18.387Z] =================================================================================================================== 00:25:43.988 [2024-11-18T01:08:18.387Z] Total : 142435.89 139.10 0.00 0.00 7165.03 690.47 11858.90' 00:25:43.988 01:08:18 -- bdevperf/common.sh@32 -- # echo '[2024-11-18 01:08:15.159681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:43.988 [2024-11-18 01:08:15.159952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142773 ] 00:25:43.988 Using job config with 4 jobs 00:25:43.988 [2024-11-18 01:08:15.315500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.988 [2024-11-18 01:08:15.402789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.988 cpumask for '\''job0'\'' is too big 00:25:43.988 cpumask for '\''job1'\'' is too big 00:25:43.988 cpumask for '\''job2'\'' is too big 00:25:43.988 cpumask for '\''job3'\'' is too big 00:25:43.988 Running I/O for 2 seconds... 
00:25:43.988 00:25:43.988 Latency(us) 00:25:43.988 [2024-11-18T01:08:18.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.988 [2024-11-18T01:08:18.387Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:43.988 Malloc0 : 2.01 35622.45 34.79 0.00 0.00 7180.14 1513.57 11858.90 00:25:43.988 [2024-11-18T01:08:18.387Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:43.988 Malloc0 : 2.01 35597.56 34.76 0.00 0.00 7173.38 1396.54 10423.34 00:25:43.988 [2024-11-18T01:08:18.387Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:43.988 Malloc0 : 2.01 35574.17 34.74 0.00 0.00 7165.62 1341.93 9050.21 00:25:43.988 [2024-11-18T01:08:18.387Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:43.988 Malloc0 : 2.02 35641.70 34.81 0.00 0.00 7141.07 690.47 7957.94 00:25:43.988 [2024-11-18T01:08:18.387Z] =================================================================================================================== 00:25:43.988 [2024-11-18T01:08:18.387Z] Total : 142435.89 139.10 0.00 0.00 7165.03 690.47 11858.90' 00:25:43.988 01:08:18 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:25:43.988 01:08:18 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:25:43.988 01:08:18 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:25:43.988 01:08:18 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:43.988 [2024-11-18 01:08:18.167921] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:43.988 [2024-11-18 01:08:18.168103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142818 ] 00:25:43.988 [2024-11-18 01:08:18.307690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.247 [2024-11-18 01:08:18.395883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.247 cpumask for 'job0' is too big 00:25:44.247 cpumask for 'job1' is too big 00:25:44.247 cpumask for 'job2' is too big 00:25:44.247 cpumask for 'job3' is too big 00:25:46.784 01:08:21 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:25:46.784 Running I/O for 2 seconds... 
00:25:46.784 00:25:46.784 Latency(us) 00:25:46.784 [2024-11-18T01:08:21.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.784 [2024-11-18T01:08:21.183Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:46.784 Malloc0 : 2.01 34845.87 34.03 0.00 0.00 7340.78 1396.54 11484.40 00:25:46.784 [2024-11-18T01:08:21.183Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:46.784 Malloc0 : 2.01 34823.52 34.01 0.00 0.00 7333.67 1349.73 10048.85 00:25:46.784 [2024-11-18T01:08:21.183Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:46.784 Malloc0 : 2.02 34801.47 33.99 0.00 0.00 7326.58 1302.92 8675.72 00:25:46.784 [2024-11-18T01:08:21.183Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:46.784 Malloc0 : 2.02 34779.44 33.96 0.00 0.00 7319.53 1341.93 8301.23 00:25:46.784 [2024-11-18T01:08:21.183Z] =================================================================================================================== 00:25:46.784 [2024-11-18T01:08:21.183Z] Total : 139250.29 135.99 0.00 0.00 7330.14 1302.92 11484.40' 00:25:46.784 01:08:21 -- bdevperf/test_config.sh@27 -- # cleanup 00:25:46.784 01:08:21 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:46.785 01:08:21 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:25:46.785 01:08:21 -- bdevperf/common.sh@8 -- # local job_section=job0 00:25:46.785 01:08:21 -- bdevperf/common.sh@9 -- # local rw=write 00:25:46.785 01:08:21 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:25:46.785 01:08:21 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:25:46.785 01:08:21 -- bdevperf/common.sh@18 -- # job='[job0]' 00:25:46.785 00:25:46.785 01:08:21 -- bdevperf/common.sh@19 -- # echo 00:25:46.785 01:08:21 -- bdevperf/common.sh@20 -- # cat 00:25:46.785 01:08:21 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:25:46.785 01:08:21 -- bdevperf/common.sh@8 -- # local job_section=job1 00:25:46.785 01:08:21 -- bdevperf/common.sh@9 -- # local rw=write 00:25:46.785 01:08:21 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:25:46.785 01:08:21 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:25:46.785 01:08:21 -- bdevperf/common.sh@18 -- # job='[job1]' 00:25:46.785 00:25:46.785 01:08:21 -- bdevperf/common.sh@19 -- # echo 00:25:46.785 01:08:21 -- bdevperf/common.sh@20 -- # cat 00:25:46.785 01:08:21 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:25:46.785 01:08:21 -- bdevperf/common.sh@8 -- # local job_section=job2 00:25:46.785 01:08:21 -- bdevperf/common.sh@9 -- # local rw=write 00:25:46.785 01:08:21 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:25:46.785 01:08:21 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:25:46.785 01:08:21 -- bdevperf/common.sh@18 -- # job='[job2]' 00:25:46.785 00:25:46.785 01:08:21 -- bdevperf/common.sh@19 -- # echo 00:25:46.785 01:08:21 -- bdevperf/common.sh@20 -- # cat 00:25:46.785 01:08:21 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:50.076 01:08:24 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-11-18 01:08:21.157446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:50.076 [2024-11-18 01:08:21.157663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142862 ] 00:25:50.076 Using job config with 3 jobs 00:25:50.076 [2024-11-18 01:08:21.298485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.076 [2024-11-18 01:08:21.382842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.076 cpumask for '\''job0'\'' is too big 00:25:50.076 cpumask for '\''job1'\'' is too big 00:25:50.076 cpumask for '\''job2'\'' is too big 00:25:50.076 Running I/O for 2 seconds... 00:25:50.076 00:25:50.076 Latency(us) 00:25:50.076 [2024-11-18T01:08:24.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.076 [2024-11-18T01:08:24.475Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:50.076 Malloc0 : 2.01 46493.78 45.40 0.00 0.00 5500.68 1427.75 8238.81 00:25:50.076 [2024-11-18T01:08:24.475Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:50.076 Malloc0 : 2.01 46465.60 45.38 0.00 0.00 5494.86 1380.94 6928.09 00:25:50.076 [2024-11-18T01:08:24.475Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:50.076 Malloc0 : 2.01 46436.46 45.35 0.00 0.00 5489.80 1318.52 6241.52 00:25:50.076 [2024-11-18T01:08:24.475Z] =================================================================================================================== 00:25:50.076 [2024-11-18T01:08:24.475Z] Total : 139395.84 136.13 0.00 0.00 5495.11 1318.52 8238.81' 00:25:50.076 01:08:24 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-11-18 01:08:21.157446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:50.076 [2024-11-18 01:08:21.157663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142862 ] 00:25:50.076 Using job config with 3 jobs 00:25:50.076 [2024-11-18 01:08:21.298485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.076 [2024-11-18 01:08:21.382842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.076 cpumask for '\''job0'\'' is too big 00:25:50.076 cpumask for '\''job1'\'' is too big 00:25:50.076 cpumask for '\''job2'\'' is too big 00:25:50.076 Running I/O for 2 seconds... 
00:25:50.076 00:25:50.076 Latency(us) 00:25:50.076 [2024-11-18T01:08:24.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.076 [2024-11-18T01:08:24.475Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:50.076 Malloc0 : 2.01 46493.78 45.40 0.00 0.00 5500.68 1427.75 8238.81 00:25:50.076 [2024-11-18T01:08:24.475Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:50.076 Malloc0 : 2.01 46465.60 45.38 0.00 0.00 5494.86 1380.94 6928.09 00:25:50.076 [2024-11-18T01:08:24.475Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:50.076 Malloc0 : 2.01 46436.46 45.35 0.00 0.00 5489.80 1318.52 6241.52 00:25:50.076 [2024-11-18T01:08:24.475Z] =================================================================================================================== 00:25:50.076 [2024-11-18T01:08:24.475Z] Total : 139395.84 136.13 0.00 0.00 5495.11 1318.52 8238.81' 00:25:50.076 01:08:24 -- bdevperf/common.sh@32 -- # echo '[2024-11-18 01:08:21.157446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:50.076 [2024-11-18 01:08:21.157663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142862 ] 00:25:50.076 Using job config with 3 jobs 00:25:50.076 [2024-11-18 01:08:21.298485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.076 [2024-11-18 01:08:21.382842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.076 cpumask for '\''job0'\'' is too big 00:25:50.076 cpumask for '\''job1'\'' is too big 00:25:50.076 cpumask for '\''job2'\'' is too big 00:25:50.076 Running I/O for 2 seconds... 
00:25:50.076 00:25:50.076 Latency(us) 00:25:50.076 [2024-11-18T01:08:24.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.076 [2024-11-18T01:08:24.475Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:50.076 Malloc0 : 2.01 46493.78 45.40 0.00 0.00 5500.68 1427.75 8238.81 00:25:50.076 [2024-11-18T01:08:24.475Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:50.076 Malloc0 : 2.01 46465.60 45.38 0.00 0.00 5494.86 1380.94 6928.09 00:25:50.076 [2024-11-18T01:08:24.475Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:50.076 Malloc0 : 2.01 46436.46 45.35 0.00 0.00 5489.80 1318.52 6241.52 00:25:50.076 [2024-11-18T01:08:24.475Z] =================================================================================================================== 00:25:50.076 [2024-11-18T01:08:24.475Z] Total : 139395.84 136.13 0.00 0.00 5495.11 1318.52 8238.81' 00:25:50.076 01:08:24 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:25:50.076 01:08:24 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:25:50.076 01:08:24 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:25:50.076 01:08:24 -- bdevperf/test_config.sh@35 -- # cleanup 00:25:50.076 01:08:24 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:50.076 01:08:24 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:25:50.076 01:08:24 -- bdevperf/common.sh@8 -- # local job_section=global 00:25:50.076 01:08:24 -- bdevperf/common.sh@9 -- # local rw=rw 00:25:50.076 01:08:24 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:25:50.076 01:08:24 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:25:50.076 01:08:24 -- bdevperf/common.sh@13 -- # cat 00:25:50.076 01:08:24 -- bdevperf/common.sh@18 -- # job='[global]' 00:25:50.076 00:25:50.076 01:08:24 -- bdevperf/common.sh@19 -- # echo 00:25:50.076 01:08:24 -- bdevperf/common.sh@20 -- # cat 00:25:50.076 01:08:24 -- bdevperf/test_config.sh@38 -- # create_job job0 00:25:50.076 01:08:24 -- bdevperf/common.sh@8 -- # local job_section=job0 00:25:50.077 01:08:24 -- bdevperf/common.sh@9 -- # local rw= 00:25:50.077 01:08:24 -- bdevperf/common.sh@10 -- # local filename= 00:25:50.077 01:08:24 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:25:50.077 00:25:50.077 01:08:24 -- bdevperf/common.sh@18 -- # job='[job0]' 00:25:50.077 01:08:24 -- bdevperf/common.sh@19 -- # echo 00:25:50.077 01:08:24 -- bdevperf/common.sh@20 -- # cat 00:25:50.077 01:08:24 -- bdevperf/test_config.sh@39 -- # create_job job1 00:25:50.077 01:08:24 -- bdevperf/common.sh@8 -- # local job_section=job1 00:25:50.077 01:08:24 -- bdevperf/common.sh@9 -- # local rw= 00:25:50.077 01:08:24 -- bdevperf/common.sh@10 -- # local filename= 00:25:50.077 01:08:24 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:25:50.077 00:25:50.077 01:08:24 -- bdevperf/common.sh@18 -- # job='[job1]' 00:25:50.077 01:08:24 -- bdevperf/common.sh@19 -- # echo 00:25:50.077 01:08:24 -- bdevperf/common.sh@20 -- # cat 00:25:50.077 01:08:24 -- bdevperf/test_config.sh@40 -- # create_job job2 00:25:50.077 01:08:24 -- bdevperf/common.sh@8 -- # local job_section=job2 00:25:50.077 01:08:24 -- bdevperf/common.sh@9 -- # local rw= 00:25:50.077 01:08:24 -- bdevperf/common.sh@10 -- # local filename= 00:25:50.077 01:08:24 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:25:50.077 01:08:24 -- bdevperf/common.sh@18 -- # job='[job2]' 
00:25:50.077 00:25:50.077 01:08:24 -- bdevperf/common.sh@19 -- # echo 00:25:50.077 01:08:24 -- bdevperf/common.sh@20 -- # cat 00:25:50.077 01:08:24 -- bdevperf/test_config.sh@41 -- # create_job job3 00:25:50.077 01:08:24 -- bdevperf/common.sh@8 -- # local job_section=job3 00:25:50.077 01:08:24 -- bdevperf/common.sh@9 -- # local rw= 00:25:50.077 01:08:24 -- bdevperf/common.sh@10 -- # local filename= 00:25:50.077 01:08:24 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:25:50.077 01:08:24 -- bdevperf/common.sh@18 -- # job='[job3]' 00:25:50.077 00:25:50.077 01:08:24 -- bdevperf/common.sh@19 -- # echo 00:25:50.077 01:08:24 -- bdevperf/common.sh@20 -- # cat 00:25:50.077 01:08:24 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:53.367 01:08:27 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-11-18 01:08:24.159353] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:53.367 [2024-11-18 01:08:24.159582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142908 ] 00:25:53.367 Using job config with 4 jobs 00:25:53.367 [2024-11-18 01:08:24.299458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.367 [2024-11-18 01:08:24.391892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.367 cpumask for '\''job0'\'' is too big 00:25:53.367 cpumask for '\''job1'\'' is too big 00:25:53.367 cpumask for '\''job2'\'' is too big 00:25:53.367 cpumask for '\''job3'\'' is too big 00:25:53.367 Running I/O for 2 seconds... 
00:25:53.367 00:25:53.367 Latency(us) 00:25:53.367 [2024-11-18T01:08:27.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.367 [2024-11-18T01:08:27.766Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.367 Malloc0 : 2.03 17302.87 16.90 0.00 0.00 14784.51 2839.89 23842.62 00:25:53.367 [2024-11-18T01:08:27.766Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.367 Malloc1 : 2.03 17291.61 16.89 0.00 0.00 14784.97 3417.23 23842.62 00:25:53.367 [2024-11-18T01:08:27.766Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.367 Malloc0 : 2.03 17281.35 16.88 0.00 0.00 14755.22 2793.08 20971.52 00:25:53.367 [2024-11-18T01:08:27.766Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.367 Malloc1 : 2.03 17270.82 16.87 0.00 0.00 14751.85 3261.20 20846.69 00:25:53.367 [2024-11-18T01:08:27.766Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.367 Malloc0 : 2.03 17260.58 16.86 0.00 0.00 14725.67 2871.10 17975.59 00:25:53.367 [2024-11-18T01:08:27.766Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.367 Malloc1 : 2.03 17250.08 16.85 0.00 0.00 14724.81 3417.23 17975.59 00:25:53.367 [2024-11-18T01:08:27.766Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.367 Malloc0 : 2.03 17239.90 16.84 0.00 0.00 14693.25 2808.69 15728.64 00:25:53.367 [2024-11-18T01:08:27.766Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.367 Malloc1 : 2.04 17229.39 16.83 0.00 0.00 14691.30 3308.01 15728.64 00:25:53.367 [2024-11-18T01:08:27.766Z] =================================================================================================================== 00:25:53.367 [2024-11-18T01:08:27.766Z] Total : 138126.60 134.89 0.00 0.00 14738.95 2793.08 23842.62' 00:25:53.367 01:08:27 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-11-18 01:08:24.159353] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:53.368 [2024-11-18 01:08:24.159582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142908 ] 00:25:53.368 Using job config with 4 jobs 00:25:53.368 [2024-11-18 01:08:24.299458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.368 [2024-11-18 01:08:24.391892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.368 cpumask for '\''job0'\'' is too big 00:25:53.368 cpumask for '\''job1'\'' is too big 00:25:53.368 cpumask for '\''job2'\'' is too big 00:25:53.368 cpumask for '\''job3'\'' is too big 00:25:53.368 Running I/O for 2 seconds... 
00:25:53.368 00:25:53.368 Latency(us) 00:25:53.368 [2024-11-18T01:08:27.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc0 : 2.03 17302.87 16.90 0.00 0.00 14784.51 2839.89 23842.62 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc1 : 2.03 17291.61 16.89 0.00 0.00 14784.97 3417.23 23842.62 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc0 : 2.03 17281.35 16.88 0.00 0.00 14755.22 2793.08 20971.52 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc1 : 2.03 17270.82 16.87 0.00 0.00 14751.85 3261.20 20846.69 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc0 : 2.03 17260.58 16.86 0.00 0.00 14725.67 2871.10 17975.59 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc1 : 2.03 17250.08 16.85 0.00 0.00 14724.81 3417.23 17975.59 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc0 : 2.03 17239.90 16.84 0.00 0.00 14693.25 2808.69 15728.64 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc1 : 2.04 17229.39 16.83 0.00 0.00 14691.30 3308.01 15728.64 00:25:53.368 [2024-11-18T01:08:27.767Z] =================================================================================================================== 00:25:53.368 [2024-11-18T01:08:27.767Z] Total : 138126.60 134.89 0.00 0.00 14738.95 2793.08 23842.62' 00:25:53.368 01:08:27 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:25:53.368 01:08:27 -- bdevperf/common.sh@32 -- # echo '[2024-11-18 01:08:24.159353] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:53.368 [2024-11-18 01:08:24.159582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142908 ] 00:25:53.368 Using job config with 4 jobs 00:25:53.368 [2024-11-18 01:08:24.299458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.368 [2024-11-18 01:08:24.391892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.368 cpumask for '\''job0'\'' is too big 00:25:53.368 cpumask for '\''job1'\'' is too big 00:25:53.368 cpumask for '\''job2'\'' is too big 00:25:53.368 cpumask for '\''job3'\'' is too big 00:25:53.368 Running I/O for 2 seconds... 
00:25:53.368 00:25:53.368 Latency(us) 00:25:53.368 [2024-11-18T01:08:27.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc0 : 2.03 17302.87 16.90 0.00 0.00 14784.51 2839.89 23842.62 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc1 : 2.03 17291.61 16.89 0.00 0.00 14784.97 3417.23 23842.62 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc0 : 2.03 17281.35 16.88 0.00 0.00 14755.22 2793.08 20971.52 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc1 : 2.03 17270.82 16.87 0.00 0.00 14751.85 3261.20 20846.69 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc0 : 2.03 17260.58 16.86 0.00 0.00 14725.67 2871.10 17975.59 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc1 : 2.03 17250.08 16.85 0.00 0.00 14724.81 3417.23 17975.59 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc0 : 2.03 17239.90 16.84 0.00 0.00 14693.25 2808.69 15728.64 00:25:53.368 [2024-11-18T01:08:27.767Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:53.368 Malloc1 : 2.04 17229.39 16.83 0.00 0.00 14691.30 3308.01 15728.64 00:25:53.368 [2024-11-18T01:08:27.767Z] =================================================================================================================== 00:25:53.368 [2024-11-18T01:08:27.767Z] Total : 138126.60 134.89 0.00 0.00 14738.95 2793.08 23842.62' 00:25:53.368 01:08:27 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:25:53.368 01:08:27 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:25:53.368 01:08:27 -- bdevperf/test_config.sh@44 -- # cleanup 00:25:53.368 01:08:27 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:53.368 01:08:27 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:53.368 00:25:53.368 real 0m12.268s 00:25:53.368 user 0m10.347s 00:25:53.368 sys 0m1.389s 00:25:53.368 01:08:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:53.368 ************************************ 00:25:53.368 END TEST bdevperf_config 00:25:53.368 01:08:27 -- common/autotest_common.sh@10 -- # set +x 00:25:53.368 ************************************ 00:25:53.368 01:08:27 -- spdk/autotest.sh@185 -- # uname -s 00:25:53.368 01:08:27 -- spdk/autotest.sh@185 -- # [[ Linux == Linux ]] 00:25:53.368 01:08:27 -- spdk/autotest.sh@186 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:25:53.368 01:08:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:53.368 01:08:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:53.368 01:08:27 -- common/autotest_common.sh@10 -- # set +x 00:25:53.368 ************************************ 00:25:53.368 START TEST reactor_set_interrupt 00:25:53.368 ************************************ 00:25:53.368 01:08:27 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:25:53.368 * Looking for test storage... 00:25:53.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:53.368 01:08:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:53.368 01:08:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:53.368 01:08:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:53.368 01:08:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:53.368 01:08:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:53.368 01:08:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:53.368 01:08:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:53.368 01:08:27 -- scripts/common.sh@335 -- # IFS=.-: 00:25:53.368 01:08:27 -- scripts/common.sh@335 -- # read -ra ver1 00:25:53.368 01:08:27 -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.368 01:08:27 -- scripts/common.sh@336 -- # read -ra ver2 00:25:53.368 01:08:27 -- scripts/common.sh@337 -- # local 'op=<' 00:25:53.368 01:08:27 -- scripts/common.sh@339 -- # ver1_l=2 00:25:53.368 01:08:27 -- scripts/common.sh@340 -- # ver2_l=1 00:25:53.368 01:08:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:53.368 01:08:27 -- scripts/common.sh@343 -- # case "$op" in 00:25:53.368 01:08:27 -- scripts/common.sh@344 -- # : 1 00:25:53.368 01:08:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:53.368 01:08:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:53.368 01:08:27 -- scripts/common.sh@364 -- # decimal 1 00:25:53.368 01:08:27 -- scripts/common.sh@352 -- # local d=1 00:25:53.368 01:08:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.368 01:08:27 -- scripts/common.sh@354 -- # echo 1 00:25:53.368 01:08:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:53.368 01:08:27 -- scripts/common.sh@365 -- # decimal 2 00:25:53.368 01:08:27 -- scripts/common.sh@352 -- # local d=2 00:25:53.368 01:08:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.368 01:08:27 -- scripts/common.sh@354 -- # echo 2 00:25:53.368 01:08:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:53.368 01:08:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:53.368 01:08:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:53.368 01:08:27 -- scripts/common.sh@367 -- # return 0 00:25:53.368 01:08:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.368 01:08:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:53.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.368 --rc genhtml_branch_coverage=1 00:25:53.368 --rc genhtml_function_coverage=1 00:25:53.368 --rc genhtml_legend=1 00:25:53.368 --rc geninfo_all_blocks=1 00:25:53.368 --rc geninfo_unexecuted_blocks=1 00:25:53.368 00:25:53.368 ' 00:25:53.368 01:08:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:53.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.368 --rc genhtml_branch_coverage=1 00:25:53.368 --rc genhtml_function_coverage=1 00:25:53.368 --rc genhtml_legend=1 00:25:53.368 --rc geninfo_all_blocks=1 00:25:53.368 --rc geninfo_unexecuted_blocks=1 00:25:53.368 00:25:53.368 ' 00:25:53.368 01:08:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:53.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.368 --rc genhtml_branch_coverage=1 00:25:53.368 --rc genhtml_function_coverage=1 00:25:53.368 --rc genhtml_legend=1 00:25:53.368 --rc 
geninfo_all_blocks=1 00:25:53.368 --rc geninfo_unexecuted_blocks=1 00:25:53.368 00:25:53.368 ' 00:25:53.368 01:08:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:53.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.368 --rc genhtml_branch_coverage=1 00:25:53.368 --rc genhtml_function_coverage=1 00:25:53.368 --rc genhtml_legend=1 00:25:53.368 --rc geninfo_all_blocks=1 00:25:53.368 --rc geninfo_unexecuted_blocks=1 00:25:53.368 00:25:53.368 ' 00:25:53.368 01:08:27 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:25:53.368 01:08:27 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:25:53.368 01:08:27 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:53.368 01:08:27 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:53.368 01:08:27 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:25:53.368 01:08:27 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:53.368 01:08:27 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:25:53.368 01:08:27 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:25:53.368 01:08:27 -- common/autotest_common.sh@34 -- # set -e 00:25:53.368 01:08:27 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:25:53.368 01:08:27 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:25:53.369 01:08:27 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:25:53.369 01:08:27 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:25:53.369 01:08:27 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:25:53.369 01:08:27 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:25:53.369 01:08:27 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:25:53.369 01:08:27 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:25:53.369 01:08:27 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:25:53.369 01:08:27 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:25:53.369 01:08:27 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:25:53.369 01:08:27 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:25:53.369 01:08:27 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:25:53.369 01:08:27 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:25:53.369 01:08:27 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:25:53.369 01:08:27 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:25:53.369 01:08:27 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:25:53.369 01:08:27 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:25:53.369 01:08:27 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:25:53.369 01:08:27 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:25:53.369 01:08:27 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:25:53.369 01:08:27 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:25:53.369 01:08:27 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:25:53.369 01:08:27 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:25:53.369 01:08:27 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:25:53.369 01:08:27 -- common/build_config.sh@22 -- # CONFIG_CET=n 
00:25:53.369 01:08:27 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:25:53.369 01:08:27 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:25:53.369 01:08:27 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:25:53.369 01:08:27 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:25:53.369 01:08:27 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:25:53.369 01:08:27 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:25:53.369 01:08:27 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:25:53.369 01:08:27 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:25:53.369 01:08:27 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:25:53.369 01:08:27 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:25:53.369 01:08:27 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:25:53.369 01:08:27 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:25:53.369 01:08:27 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:25:53.369 01:08:27 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:25:53.369 01:08:27 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:25:53.369 01:08:27 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:25:53.369 01:08:27 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:25:53.369 01:08:27 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:25:53.369 01:08:27 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:25:53.369 01:08:27 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:25:53.369 01:08:27 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:25:53.369 01:08:27 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:25:53.369 01:08:27 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:25:53.369 01:08:27 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:25:53.369 01:08:27 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:25:53.369 01:08:27 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:25:53.369 01:08:27 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:25:53.369 01:08:27 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:25:53.369 01:08:27 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:25:53.369 01:08:27 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:25:53.369 01:08:27 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:25:53.369 01:08:27 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:25:53.369 01:08:27 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:25:53.369 01:08:27 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:25:53.369 01:08:27 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:25:53.369 01:08:27 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:25:53.369 01:08:27 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:25:53.369 01:08:27 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:25:53.369 01:08:27 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:25:53.369 01:08:27 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:25:53.369 01:08:27 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:25:53.369 01:08:27 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:25:53.369 01:08:27 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:25:53.369 01:08:27 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:25:53.369 01:08:27 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:25:53.369 01:08:27 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 
00:25:53.369 01:08:27 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:25:53.369 01:08:27 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:25:53.369 01:08:27 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:25:53.369 01:08:27 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:25:53.369 01:08:27 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:25:53.369 01:08:27 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:25:53.369 01:08:27 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:25:53.369 01:08:27 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:25:53.369 01:08:27 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:25:53.369 01:08:27 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:25:53.369 01:08:27 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:25:53.369 01:08:27 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:25:53.369 01:08:27 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:25:53.369 01:08:27 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:25:53.369 01:08:27 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:25:53.369 01:08:27 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:25:53.369 01:08:27 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:25:53.369 01:08:27 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:25:53.369 01:08:27 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:25:53.369 01:08:27 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:25:53.369 01:08:27 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:25:53.369 01:08:27 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:25:53.369 01:08:27 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:25:53.369 01:08:27 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:25:53.369 01:08:27 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:25:53.369 01:08:27 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:25:53.369 01:08:27 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:25:53.369 #define SPDK_CONFIG_H 00:25:53.369 #define SPDK_CONFIG_APPS 1 00:25:53.369 #define SPDK_CONFIG_ARCH native 00:25:53.369 #define SPDK_CONFIG_ASAN 1 00:25:53.369 #undef SPDK_CONFIG_AVAHI 00:25:53.369 #undef SPDK_CONFIG_CET 00:25:53.369 #define SPDK_CONFIG_COVERAGE 1 00:25:53.369 #define SPDK_CONFIG_CROSS_PREFIX 00:25:53.369 #undef SPDK_CONFIG_CRYPTO 00:25:53.369 #undef SPDK_CONFIG_CRYPTO_MLX5 00:25:53.369 #undef SPDK_CONFIG_CUSTOMOCF 00:25:53.369 #undef SPDK_CONFIG_DAOS 00:25:53.369 #define SPDK_CONFIG_DAOS_DIR 00:25:53.369 #define SPDK_CONFIG_DEBUG 1 00:25:53.369 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:25:53.369 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:25:53.369 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:25:53.369 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:25:53.369 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:25:53.369 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:25:53.369 #define SPDK_CONFIG_EXAMPLES 1 00:25:53.369 #undef SPDK_CONFIG_FC 00:25:53.369 #define 
SPDK_CONFIG_FC_PATH 00:25:53.369 #define SPDK_CONFIG_FIO_PLUGIN 1 00:25:53.369 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:25:53.369 #undef SPDK_CONFIG_FUSE 00:25:53.369 #undef SPDK_CONFIG_FUZZER 00:25:53.369 #define SPDK_CONFIG_FUZZER_LIB 00:25:53.369 #undef SPDK_CONFIG_GOLANG 00:25:53.369 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:25:53.369 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:25:53.369 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:25:53.369 #undef SPDK_CONFIG_HAVE_LIBBSD 00:25:53.369 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:25:53.369 #define SPDK_CONFIG_IDXD 1 00:25:53.369 #undef SPDK_CONFIG_IDXD_KERNEL 00:25:53.369 #undef SPDK_CONFIG_IPSEC_MB 00:25:53.369 #define SPDK_CONFIG_IPSEC_MB_DIR 00:25:53.369 #define SPDK_CONFIG_ISAL 1 00:25:53.369 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:25:53.369 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:25:53.369 #define SPDK_CONFIG_LIBDIR 00:25:53.369 #undef SPDK_CONFIG_LTO 00:25:53.369 #define SPDK_CONFIG_MAX_LCORES 00:25:53.369 #define SPDK_CONFIG_NVME_CUSE 1 00:25:53.369 #undef SPDK_CONFIG_OCF 00:25:53.369 #define SPDK_CONFIG_OCF_PATH 00:25:53.369 #define SPDK_CONFIG_OPENSSL_PATH 00:25:53.369 #undef SPDK_CONFIG_PGO_CAPTURE 00:25:53.369 #undef SPDK_CONFIG_PGO_USE 00:25:53.369 #define SPDK_CONFIG_PREFIX /usr/local 00:25:53.369 #define SPDK_CONFIG_RAID5F 1 00:25:53.369 #undef SPDK_CONFIG_RBD 00:25:53.369 #define SPDK_CONFIG_RDMA 1 00:25:53.369 #define SPDK_CONFIG_RDMA_PROV verbs 00:25:53.369 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:25:53.369 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:25:53.369 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:25:53.369 #undef SPDK_CONFIG_SHARED 00:25:53.369 #undef SPDK_CONFIG_SMA 00:25:53.369 #define SPDK_CONFIG_TESTS 1 00:25:53.369 #undef SPDK_CONFIG_TSAN 00:25:53.369 #undef SPDK_CONFIG_UBLK 00:25:53.369 #define SPDK_CONFIG_UBSAN 1 00:25:53.369 #define SPDK_CONFIG_UNIT_TESTS 1 00:25:53.369 #undef SPDK_CONFIG_URING 00:25:53.369 #define SPDK_CONFIG_URING_PATH 00:25:53.369 #undef SPDK_CONFIG_URING_ZNS 00:25:53.370 #undef SPDK_CONFIG_USDT 00:25:53.370 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:25:53.370 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:25:53.370 #undef SPDK_CONFIG_VFIO_USER 00:25:53.370 #define SPDK_CONFIG_VFIO_USER_DIR 00:25:53.370 #define SPDK_CONFIG_VHOST 1 00:25:53.370 #define SPDK_CONFIG_VIRTIO 1 00:25:53.370 #undef SPDK_CONFIG_VTUNE 00:25:53.370 #define SPDK_CONFIG_VTUNE_DIR 00:25:53.370 #define SPDK_CONFIG_WERROR 1 00:25:53.370 #define SPDK_CONFIG_WPDK_DIR 00:25:53.370 #undef SPDK_CONFIG_XNVME 00:25:53.370 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:25:53.370 01:08:27 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:25:53.370 01:08:27 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:53.370 01:08:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.370 01:08:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.370 01:08:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.370 01:08:27 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:53.370 01:08:27 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:53.370 01:08:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:53.370 01:08:27 -- paths/export.sh@5 -- # export PATH 00:25:53.370 01:08:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:53.370 01:08:27 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:25:53.370 01:08:27 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:25:53.370 01:08:27 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:25:53.370 01:08:27 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:25:53.370 01:08:27 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:25:53.370 01:08:27 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:25:53.370 01:08:27 -- pm/common@16 -- # TEST_TAG=N/A 00:25:53.370 01:08:27 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:25:53.370 01:08:27 -- common/autotest_common.sh@52 -- # : 1 00:25:53.370 01:08:27 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:25:53.370 01:08:27 -- common/autotest_common.sh@56 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:25:53.370 01:08:27 -- common/autotest_common.sh@58 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:25:53.370 01:08:27 -- common/autotest_common.sh@60 -- # : 1 00:25:53.370 01:08:27 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:25:53.370 01:08:27 -- common/autotest_common.sh@62 -- # : 1 00:25:53.370 01:08:27 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:25:53.370 01:08:27 -- common/autotest_common.sh@64 -- # : 00:25:53.370 01:08:27 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:25:53.370 01:08:27 -- common/autotest_common.sh@66 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:25:53.370 01:08:27 -- common/autotest_common.sh@68 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:25:53.370 01:08:27 -- common/autotest_common.sh@70 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:25:53.370 01:08:27 -- common/autotest_common.sh@72 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@73 -- # export 
SPDK_TEST_ISCSI_INITIATOR 00:25:53.370 01:08:27 -- common/autotest_common.sh@74 -- # : 1 00:25:53.370 01:08:27 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:25:53.370 01:08:27 -- common/autotest_common.sh@76 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:25:53.370 01:08:27 -- common/autotest_common.sh@78 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:25:53.370 01:08:27 -- common/autotest_common.sh@80 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:25:53.370 01:08:27 -- common/autotest_common.sh@82 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:25:53.370 01:08:27 -- common/autotest_common.sh@84 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:25:53.370 01:08:27 -- common/autotest_common.sh@86 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:25:53.370 01:08:27 -- common/autotest_common.sh@88 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:25:53.370 01:08:27 -- common/autotest_common.sh@90 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:25:53.370 01:08:27 -- common/autotest_common.sh@92 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:25:53.370 01:08:27 -- common/autotest_common.sh@94 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:25:53.370 01:08:27 -- common/autotest_common.sh@96 -- # : rdma 00:25:53.370 01:08:27 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:25:53.370 01:08:27 -- common/autotest_common.sh@98 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:25:53.370 01:08:27 -- common/autotest_common.sh@100 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:25:53.370 01:08:27 -- common/autotest_common.sh@102 -- # : 1 00:25:53.370 01:08:27 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:25:53.370 01:08:27 -- common/autotest_common.sh@104 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:25:53.370 01:08:27 -- common/autotest_common.sh@106 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:25:53.370 01:08:27 -- common/autotest_common.sh@108 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:25:53.370 01:08:27 -- common/autotest_common.sh@110 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:25:53.370 01:08:27 -- common/autotest_common.sh@112 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:25:53.370 01:08:27 -- common/autotest_common.sh@114 -- # : 1 00:25:53.370 01:08:27 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:25:53.370 01:08:27 -- common/autotest_common.sh@116 -- # : 1 00:25:53.370 01:08:27 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:25:53.370 01:08:27 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:25:53.370 01:08:27 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:25:53.370 01:08:27 -- common/autotest_common.sh@120 -- # : 0 00:25:53.370 
01:08:27 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:25:53.370 01:08:27 -- common/autotest_common.sh@122 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:25:53.370 01:08:27 -- common/autotest_common.sh@124 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:25:53.370 01:08:27 -- common/autotest_common.sh@126 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:25:53.370 01:08:27 -- common/autotest_common.sh@128 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:25:53.370 01:08:27 -- common/autotest_common.sh@130 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:25:53.370 01:08:27 -- common/autotest_common.sh@132 -- # : v22.11.4 00:25:53.370 01:08:27 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:25:53.370 01:08:27 -- common/autotest_common.sh@134 -- # : true 00:25:53.370 01:08:27 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:25:53.370 01:08:27 -- common/autotest_common.sh@136 -- # : 1 00:25:53.370 01:08:27 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:25:53.370 01:08:27 -- common/autotest_common.sh@138 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:25:53.370 01:08:27 -- common/autotest_common.sh@140 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:25:53.370 01:08:27 -- common/autotest_common.sh@142 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:25:53.370 01:08:27 -- common/autotest_common.sh@144 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:25:53.370 01:08:27 -- common/autotest_common.sh@146 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:25:53.370 01:08:27 -- common/autotest_common.sh@148 -- # : 00:25:53.370 01:08:27 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:25:53.370 01:08:27 -- common/autotest_common.sh@150 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:25:53.370 01:08:27 -- common/autotest_common.sh@152 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:25:53.370 01:08:27 -- common/autotest_common.sh@154 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:25:53.370 01:08:27 -- common/autotest_common.sh@156 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:25:53.370 01:08:27 -- common/autotest_common.sh@158 -- # : 0 00:25:53.370 01:08:27 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:25:53.370 01:08:27 -- common/autotest_common.sh@160 -- # : 0 00:25:53.371 01:08:27 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:25:53.371 01:08:27 -- common/autotest_common.sh@163 -- # : 00:25:53.371 01:08:27 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:25:53.371 01:08:27 -- common/autotest_common.sh@165 -- # : 0 00:25:53.371 01:08:27 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:25:53.371 01:08:27 -- common/autotest_common.sh@167 -- # : 0 00:25:53.371 01:08:27 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:25:53.371 01:08:27 -- 
common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:25:53.371 01:08:27 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:25:53.371 01:08:27 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:25:53.371 01:08:27 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:25:53.371 01:08:27 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:53.371 01:08:27 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:53.371 01:08:27 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:53.371 01:08:27 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:53.371 01:08:27 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:25:53.371 01:08:27 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:25:53.371 01:08:27 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:53.371 01:08:27 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:53.371 01:08:27 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:25:53.371 01:08:27 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:25:53.371 01:08:27 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:25:53.371 01:08:27 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:25:53.371 01:08:27 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:25:53.371 01:08:27 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
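The exports above are autotest_common.sh wiring up the library search paths, Python paths, and sanitizer options that every test process inherits. A condensed example of setting up a comparable environment by hand is sketched below; the repository locations are placeholders, while the ASAN/UBSAN values are the ones visible in the trace.

# Illustrative only; the real values are exported by test/common/autotest_common.sh.
rootdir=$HOME/spdk_repo/spdk             # placeholder checkout location
dpdk_lib=$HOME/spdk_repo/dpdk/build/lib  # placeholder DPDK build location

export LD_LIBRARY_PATH="$rootdir/build/lib:$dpdk_lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export PYTHONPATH="$rootdir/python:$rootdir/test/rpc_plugins${PYTHONPATH:+:$PYTHONPATH}"
export PYTHONDONTWRITEBYTECODE=1

# Sanitizer behaviour as shown in the log: abort on error, keep core dumps disabled.
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134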
00:25:53.371 01:08:27 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:25:53.371 01:08:27 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:25:53.371 01:08:27 -- common/autotest_common.sh@196 -- # cat 00:25:53.371 01:08:27 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:25:53.371 01:08:27 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:25:53.371 01:08:27 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:25:53.371 01:08:27 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:25:53.371 01:08:27 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:25:53.371 01:08:27 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:25:53.371 01:08:27 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:25:53.371 01:08:27 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:25:53.371 01:08:27 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:25:53.371 01:08:27 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:25:53.371 01:08:27 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:25:53.371 01:08:27 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:25:53.371 01:08:27 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:25:53.371 01:08:27 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:25:53.371 01:08:27 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:25:53.371 01:08:27 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:25:53.371 01:08:27 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:25:53.371 01:08:27 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:53.371 01:08:27 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:53.371 01:08:27 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:25:53.371 01:08:27 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:25:53.371 01:08:27 -- common/autotest_common.sh@249 -- # _LCOV= 00:25:53.371 01:08:27 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:25:53.371 01:08:27 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:25:53.371 01:08:27 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:25:53.371 01:08:27 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:25:53.371 01:08:27 -- common/autotest_common.sh@255 -- # lcov_opt= 00:25:53.371 01:08:27 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:25:53.371 01:08:27 -- common/autotest_common.sh@259 -- # export valgrind= 00:25:53.371 01:08:27 -- common/autotest_common.sh@259 -- # valgrind= 00:25:53.371 01:08:27 -- common/autotest_common.sh@265 -- # uname -s 00:25:53.371 01:08:27 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:25:53.371 01:08:27 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:25:53.371 01:08:27 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:25:53.371 01:08:27 -- common/autotest_common.sh@267 -- # 
CLEAR_HUGE=yes 00:25:53.371 01:08:27 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:25:53.371 01:08:27 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:25:53.371 01:08:27 -- common/autotest_common.sh@275 -- # MAKE=make 00:25:53.371 01:08:27 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:25:53.371 01:08:27 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:25:53.371 01:08:27 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:25:53.371 01:08:27 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:25:53.371 01:08:27 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:25:53.371 01:08:27 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:25:53.371 01:08:27 -- common/autotest_common.sh@319 -- # [[ -z 142989 ]] 00:25:53.371 01:08:27 -- common/autotest_common.sh@319 -- # kill -0 142989 00:25:53.371 01:08:27 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:25:53.371 01:08:27 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:25:53.371 01:08:27 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:25:53.371 01:08:27 -- common/autotest_common.sh@332 -- # local mount target_dir 00:25:53.371 01:08:27 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:25:53.371 01:08:27 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:25:53.371 01:08:27 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:25:53.371 01:08:27 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:25:53.371 01:08:27 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.Zija69 00:25:53.371 01:08:27 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:25:53.371 01:08:27 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:25:53.371 01:08:27 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:25:53.371 01:08:27 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.Zija69/tests/interrupt /tmp/spdk.Zija69 00:25:53.371 01:08:27 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:25:53.371 01:08:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:25:53.371 01:08:27 -- common/autotest_common.sh@328 -- # df -T 00:25:53.371 01:08:27 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:25:53.371 01:08:27 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:25:53.371 01:08:27 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:25:53.371 01:08:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=1248956416 00:25:53.371 01:08:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253683200 00:25:53.371 01:08:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=4726784 00:25:53.371 01:08:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:25:53.371 01:08:27 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1 00:25:53.371 01:08:27 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:25:53.371 01:08:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=9433841664 00:25:53.371 01:08:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20616794112 00:25:53.371 01:08:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=11166175232 00:25:53.371 01:08:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:25:53.371 01:08:27 -- 
common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:25:53.371 01:08:27 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:25:53.371 01:08:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=6267146240 00:25:53.371 01:08:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6268403712 00:25:53.371 01:08:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:25:53.371 01:08:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:25:53.371 01:08:27 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:25:53.371 01:08:27 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:25:53.371 01:08:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=5242880 00:25:53.371 01:08:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880 00:25:53.371 01:08:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:25:53.372 01:08:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:25:53.372 01:08:27 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15 00:25:53.372 01:08:27 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:25:53.372 01:08:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=103061504 00:25:53.372 01:08:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968 00:25:53.372 01:08:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=6334464 00:25:53.372 01:08:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:25:53.372 01:08:27 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:25:53.372 01:08:27 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:25:53.372 01:08:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253675008 00:25:53.372 01:08:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253679104 00:25:53.372 01:08:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=4096 00:25:53.372 01:08:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:25:53.372 01:08:27 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:25:53.372 01:08:27 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:25:53.372 01:08:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=97122795520 00:25:53.372 01:08:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:25:53.372 01:08:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=2579984384 00:25:53.372 01:08:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:25:53.372 01:08:27 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:25:53.372 * Looking for test storage... 
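The set_test_storage logic traced around this point walks the df -T output, records each mount's size and available space, and then decides whether the candidate directory can hold the ~2 GiB the interrupt tests request. A simplified stand-alone version of that decision, using GNU df's --output columns instead of the awk parsing in the trace, might look like this (the path and size are example values):

# Sketch of a df-based free-space check.
required=2214592512                                   # ~2 GiB, as requested in the trace
target=/home/vagrant/spdk_repo/spdk/test/interrupt    # candidate test directory

# Available bytes on the filesystem backing $target.
avail=$(df --block-size=1 --output=avail "$target" | tail -n1)

if (( avail >= required )); then
    echo "* Found test storage at $target"
else
    echo "not enough space: have $avail bytes, need $required" >&2
fi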
00:25:53.372 01:08:27 -- common/autotest_common.sh@369 -- # local target_space new_size 00:25:53.372 01:08:27 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:25:53.372 01:08:27 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:53.372 01:08:27 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:25:53.372 01:08:27 -- common/autotest_common.sh@373 -- # mount=/ 00:25:53.372 01:08:27 -- common/autotest_common.sh@375 -- # target_space=9433841664 00:25:53.372 01:08:27 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:25:53.372 01:08:27 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:25:53.372 01:08:27 -- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]] 00:25:53.372 01:08:27 -- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]] 00:25:53.372 01:08:27 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:25:53.372 01:08:27 -- common/autotest_common.sh@382 -- # new_size=13380767744 00:25:53.372 01:08:27 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:25:53.372 01:08:27 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:53.372 01:08:27 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:53.372 01:08:27 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:53.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:53.372 01:08:27 -- common/autotest_common.sh@390 -- # return 0 00:25:53.372 01:08:27 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:25:53.372 01:08:27 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:25:53.372 01:08:27 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:25:53.372 01:08:27 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:25:53.372 01:08:27 -- common/autotest_common.sh@1682 -- # true 00:25:53.372 01:08:27 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:25:53.372 01:08:27 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:25:53.372 01:08:27 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:25:53.372 01:08:27 -- common/autotest_common.sh@27 -- # exec 00:25:53.372 01:08:27 -- common/autotest_common.sh@29 -- # exec 00:25:53.372 01:08:27 -- common/autotest_common.sh@31 -- # xtrace_restore 00:25:53.372 01:08:27 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:25:53.372 01:08:27 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:25:53.372 01:08:27 -- common/autotest_common.sh@18 -- # set -x 00:25:53.372 01:08:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:53.372 01:08:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:53.372 01:08:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:53.372 01:08:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:53.372 01:08:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:53.372 01:08:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:53.372 01:08:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:53.372 01:08:27 -- scripts/common.sh@335 -- # IFS=.-: 00:25:53.372 01:08:27 -- scripts/common.sh@335 -- # read -ra ver1 00:25:53.372 01:08:27 -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.372 01:08:27 -- scripts/common.sh@336 -- # read -ra ver2 00:25:53.372 01:08:27 -- scripts/common.sh@337 -- # local 'op=<' 00:25:53.372 01:08:27 -- scripts/common.sh@339 -- # ver1_l=2 00:25:53.372 01:08:27 -- scripts/common.sh@340 -- # ver2_l=1 00:25:53.372 01:08:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:53.372 01:08:27 -- scripts/common.sh@343 -- # case "$op" in 00:25:53.372 01:08:27 -- scripts/common.sh@344 -- # : 1 00:25:53.372 01:08:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:53.372 01:08:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:53.372 01:08:27 -- scripts/common.sh@364 -- # decimal 1 00:25:53.372 01:08:27 -- scripts/common.sh@352 -- # local d=1 00:25:53.372 01:08:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.372 01:08:27 -- scripts/common.sh@354 -- # echo 1 00:25:53.372 01:08:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:53.372 01:08:27 -- scripts/common.sh@365 -- # decimal 2 00:25:53.372 01:08:27 -- scripts/common.sh@352 -- # local d=2 00:25:53.372 01:08:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.372 01:08:27 -- scripts/common.sh@354 -- # echo 2 00:25:53.372 01:08:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:53.372 01:08:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:53.372 01:08:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:53.372 01:08:27 -- scripts/common.sh@367 -- # return 0 00:25:53.372 01:08:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.372 01:08:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:53.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.372 --rc genhtml_branch_coverage=1 00:25:53.372 --rc genhtml_function_coverage=1 00:25:53.372 --rc genhtml_legend=1 00:25:53.372 --rc geninfo_all_blocks=1 00:25:53.372 --rc geninfo_unexecuted_blocks=1 00:25:53.372 00:25:53.372 ' 00:25:53.372 01:08:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:53.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.372 --rc genhtml_branch_coverage=1 00:25:53.372 --rc genhtml_function_coverage=1 00:25:53.372 --rc genhtml_legend=1 00:25:53.372 --rc geninfo_all_blocks=1 00:25:53.372 --rc geninfo_unexecuted_blocks=1 00:25:53.372 00:25:53.372 ' 00:25:53.372 01:08:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:53.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.372 --rc genhtml_branch_coverage=1 00:25:53.372 --rc genhtml_function_coverage=1 00:25:53.372 --rc genhtml_legend=1 00:25:53.372 --rc geninfo_all_blocks=1 00:25:53.372 --rc 
geninfo_unexecuted_blocks=1 00:25:53.372 00:25:53.372 ' 00:25:53.372 01:08:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:53.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.372 --rc genhtml_branch_coverage=1 00:25:53.372 --rc genhtml_function_coverage=1 00:25:53.372 --rc genhtml_legend=1 00:25:53.372 --rc geninfo_all_blocks=1 00:25:53.372 --rc geninfo_unexecuted_blocks=1 00:25:53.372 00:25:53.372 ' 00:25:53.372 01:08:27 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:53.372 01:08:27 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:25:53.372 01:08:27 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:25:53.372 01:08:27 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:25:53.372 01:08:27 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:25:53.372 01:08:27 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:25:53.372 01:08:27 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:25:53.373 01:08:27 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:25:53.373 01:08:27 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:25:53.373 01:08:27 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.373 01:08:27 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:25:53.373 01:08:27 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=143045 00:25:53.373 01:08:27 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:53.373 01:08:27 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 143045 /var/tmp/spdk.sock 00:25:53.373 01:08:27 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:25:53.373 01:08:27 -- common/autotest_common.sh@829 -- # '[' -z 143045 ']' 00:25:53.373 01:08:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.373 01:08:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:53.373 01:08:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.373 01:08:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:53.373 01:08:27 -- common/autotest_common.sh@10 -- # set +x 00:25:53.373 [2024-11-18 01:08:27.766858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:53.373 [2024-11-18 01:08:27.767643] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143045 ] 00:25:53.632 [2024-11-18 01:08:27.932301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:53.632 [2024-11-18 01:08:28.008267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.632 [2024-11-18 01:08:28.010218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.632 [2024-11-18 01:08:28.010113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.891 [2024-11-18 01:08:28.125961] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:54.461 01:08:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:54.461 01:08:28 -- common/autotest_common.sh@862 -- # return 0 00:25:54.461 01:08:28 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:25:54.461 01:08:28 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:54.720 Malloc0 00:25:54.720 Malloc1 00:25:54.720 Malloc2 00:25:54.720 01:08:29 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:25:54.720 01:08:29 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:25:54.720 01:08:29 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:25:54.720 01:08:29 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:25:54.720 5000+0 records in 00:25:54.720 5000+0 records out 00:25:54.720 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0367134 s, 279 MB/s 00:25:54.720 01:08:29 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:25:54.989 AIO0 00:25:54.989 01:08:29 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 143045 00:25:54.989 01:08:29 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 143045 without_thd 00:25:54.989 01:08:29 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=143045 00:25:54.989 01:08:29 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:25:54.989 01:08:29 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:25:54.989 01:08:29 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:25:54.989 01:08:29 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:25:54.989 01:08:29 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:25:54.989 01:08:29 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:25:54.989 01:08:29 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:54.989 01:08:29 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:54.989 01:08:29 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:25:55.248 01:08:29 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:25:55.248 01:08:29 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:25:55.248 01:08:29 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:25:55.248 01:08:29 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:25:55.248 01:08:29 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:25:55.248 01:08:29 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:25:55.248 01:08:29 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:55.248 01:08:29 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:25:55.248 01:08:29 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:55.506 01:08:29 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:25:55.506 spdk_thread ids are 1 on reactor0. 00:25:55.506 01:08:29 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:25:55.506 01:08:29 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:25:55.506 01:08:29 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:55.506 01:08:29 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143045 0 00:25:55.506 01:08:29 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143045 0 idle 00:25:55.506 01:08:29 -- interrupt/interrupt_common.sh@33 -- # local pid=143045 00:25:55.506 01:08:29 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:55.506 01:08:29 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:55.506 01:08:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:55.506 01:08:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:55.506 01:08:29 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:55.506 01:08:29 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:55.506 01:08:29 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:55.506 01:08:29 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:55.506 01:08:29 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143045 -w 256 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143045 root 20 0 20.1t 58076 26084 S 0.0 0.5 0:00.40 reactor_0' 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@48 -- # echo 143045 root 20 0 20.1t 58076 26084 S 0.0 0.5 0:00.40 reactor_0 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:55.765 01:08:29 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:55.765 01:08:29 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143045 1 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143045 1 idle 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@33 -- # local pid=143045 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:55.765 
01:08:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143045 -w 256 00:25:55.765 01:08:29 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143048 root 20 0 20.1t 58076 26084 S 0.0 0.5 0:00.00 reactor_1' 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@48 -- # echo 143048 root 20 0 20.1t 58076 26084 S 0.0 0.5 0:00.00 reactor_1 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:55.765 01:08:30 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:55.765 01:08:30 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143045 2 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143045 2 idle 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@33 -- # local pid=143045 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143045 -w 256 00:25:55.765 01:08:30 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:56.023 01:08:30 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143049 root 20 0 20.1t 58076 26084 S 0.0 0.5 0:00.00 reactor_2' 00:25:56.023 01:08:30 -- interrupt/interrupt_common.sh@48 -- # echo 143049 root 20 0 20.1t 58076 26084 S 0.0 0.5 0:00.00 reactor_2 00:25:56.023 01:08:30 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:56.023 01:08:30 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:56.023 01:08:30 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:56.023 01:08:30 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:56.023 01:08:30 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:56.023 01:08:30 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:56.023 01:08:30 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:56.023 01:08:30 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:56.023 01:08:30 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:25:56.023 01:08:30 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:25:56.023 
01:08:30 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:25:56.282 [2024-11-18 01:08:30.534564] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:56.282 01:08:30 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:25:56.541 [2024-11-18 01:08:30.794589] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:25:56.541 [2024-11-18 01:08:30.795556] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:56.541 01:08:30 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:25:56.799 [2024-11-18 01:08:30.970361] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:25:56.799 [2024-11-18 01:08:30.971096] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:56.799 01:08:30 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:25:56.799 01:08:30 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 143045 0 00:25:56.799 01:08:30 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 143045 0 busy 00:25:56.799 01:08:30 -- interrupt/interrupt_common.sh@33 -- # local pid=143045 00:25:56.799 01:08:30 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:56.799 01:08:30 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:25:56.799 01:08:30 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:25:56.799 01:08:30 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:56.799 01:08:30 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:56.799 01:08:30 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:56.799 01:08:30 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143045 -w 256 00:25:56.799 01:08:30 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:56.799 01:08:31 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143045 root 20 0 20.1t 58232 26084 R 99.9 0.5 0:00.77 reactor_0' 00:25:56.799 01:08:31 -- interrupt/interrupt_common.sh@48 -- # echo 143045 root 20 0 20.1t 58232 26084 R 99.9 0.5 0:00.77 reactor_0 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:56.800 01:08:31 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:25:56.800 01:08:31 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 143045 2 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 143045 2 busy 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@33 -- # local pid=143045 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:56.800 01:08:31 -- 
interrupt/interrupt_common.sh@35 -- # local state=busy 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143045 -w 256 00:25:56.800 01:08:31 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:57.057 01:08:31 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143049 root 20 0 20.1t 58232 26084 R 99.9 0.5 0:00.36 reactor_2' 00:25:57.058 01:08:31 -- interrupt/interrupt_common.sh@48 -- # echo 143049 root 20 0 20.1t 58232 26084 R 99.9 0.5 0:00.36 reactor_2 00:25:57.058 01:08:31 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:57.058 01:08:31 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:57.058 01:08:31 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:25:57.058 01:08:31 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:25:57.058 01:08:31 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:25:57.058 01:08:31 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:25:57.058 01:08:31 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:25:57.058 01:08:31 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:57.058 01:08:31 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:25:57.321 [2024-11-18 01:08:31.602333] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:25:57.321 [2024-11-18 01:08:31.603140] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:57.321 01:08:31 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:25:57.321 01:08:31 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 143045 2 00:25:57.321 01:08:31 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143045 2 idle 00:25:57.321 01:08:31 -- interrupt/interrupt_common.sh@33 -- # local pid=143045 00:25:57.321 01:08:31 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:57.321 01:08:31 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:57.321 01:08:31 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:57.321 01:08:31 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:57.321 01:08:31 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:57.321 01:08:31 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:57.321 01:08:31 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:57.321 01:08:31 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143045 -w 256 00:25:57.321 01:08:31 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:57.601 01:08:31 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143049 root 20 0 20.1t 58280 26084 S 0.0 0.5 0:00.63 reactor_2' 00:25:57.601 01:08:31 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:57.601 01:08:31 -- interrupt/interrupt_common.sh@48 -- # echo 143049 root 20 0 20.1t 58280 26084 S 0.0 0.5 0:00.63 reactor_2 00:25:57.601 01:08:31 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:57.601 01:08:31 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:57.601 01:08:31 -- interrupt/interrupt_common.sh@49 -- 
# cpu_rate=0 00:25:57.601 01:08:31 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:57.601 01:08:31 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:57.601 01:08:31 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:57.601 01:08:31 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:57.601 01:08:31 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:25:57.601 [2024-11-18 01:08:31.966383] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:25:57.601 [2024-11-18 01:08:31.967182] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:57.601 01:08:31 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:25:57.601 01:08:31 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:25:57.601 01:08:31 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:25:57.860 [2024-11-18 01:08:32.226783] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:57.860 01:08:32 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 143045 0 00:25:57.860 01:08:32 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143045 0 idle 00:25:57.860 01:08:32 -- interrupt/interrupt_common.sh@33 -- # local pid=143045 00:25:57.860 01:08:32 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:57.860 01:08:32 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:57.860 01:08:32 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:57.860 01:08:32 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:57.860 01:08:32 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:57.860 01:08:32 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:57.860 01:08:32 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:57.860 01:08:32 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143045 -w 256 00:25:57.860 01:08:32 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:58.119 01:08:32 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143045 root 20 0 20.1t 58384 26084 S 0.0 0.5 0:01.58 reactor_0' 00:25:58.119 01:08:32 -- interrupt/interrupt_common.sh@48 -- # echo 143045 root 20 0 20.1t 58384 26084 S 0.0 0.5 0:01.58 reactor_0 00:25:58.119 01:08:32 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:58.119 01:08:32 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:58.119 01:08:32 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:58.119 01:08:32 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:58.119 01:08:32 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:58.119 01:08:32 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:58.119 01:08:32 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:58.119 01:08:32 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:58.119 01:08:32 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:25:58.119 01:08:32 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:25:58.119 01:08:32 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:25:58.119 01:08:32 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 143045 00:25:58.119 01:08:32 -- 
common/autotest_common.sh@936 -- # '[' -z 143045 ']' 00:25:58.119 01:08:32 -- common/autotest_common.sh@940 -- # kill -0 143045 00:25:58.119 01:08:32 -- common/autotest_common.sh@941 -- # uname 00:25:58.119 01:08:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:58.119 01:08:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 143045 00:25:58.119 01:08:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:58.119 01:08:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:58.119 01:08:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 143045' 00:25:58.119 killing process with pid 143045 00:25:58.119 01:08:32 -- common/autotest_common.sh@955 -- # kill 143045 00:25:58.119 01:08:32 -- common/autotest_common.sh@960 -- # wait 143045 00:25:58.686 01:08:32 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:25:58.686 01:08:32 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:25:58.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.686 01:08:32 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:25:58.686 01:08:32 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.686 01:08:32 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:25:58.686 01:08:32 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=143190 00:25:58.686 01:08:32 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:25:58.686 01:08:32 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:58.686 01:08:32 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 143190 /var/tmp/spdk.sock 00:25:58.686 01:08:32 -- common/autotest_common.sh@829 -- # '[' -z 143190 ']' 00:25:58.686 01:08:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.686 01:08:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:58.686 01:08:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.686 01:08:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:58.686 01:08:32 -- common/autotest_common.sh@10 -- # set +x 00:25:58.686 [2024-11-18 01:08:32.946410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:58.686 [2024-11-18 01:08:32.946789] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143190 ] 00:25:58.944 [2024-11-18 01:08:33.099219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:58.944 [2024-11-18 01:08:33.176044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.944 [2024-11-18 01:08:33.176183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.944 [2024-11-18 01:08:33.176187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.944 [2024-11-18 01:08:33.291713] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
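The busy/idle checks traced above all follow the same pattern: sample the reactor thread once with top, take its %CPU column, and compare it against fixed thresholds (roughly >= 70% counts as "busy", <= 30% as "idle"). A minimal bash sketch of that logic, reconstructed from the trace rather than copied from interrupt_common.sh, using the hypothetical name check_reactor_state:

  check_reactor_state() {                         # hypothetical helper; approximates the reactor_is_busy_or_idle calls traced above
      local pid=$1 idx=$2 state=$3                # e.g. 143045 0 idle
      local row cpu_rate
      row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | sed -e 's/^\s*//g')
      cpu_rate=$(echo "$row" | awk '{print $9}')  # %CPU column of the reactor thread
      cpu_rate=${cpu_rate%.*}                     # 99.9 -> 99, 0.0 -> 0
      if [[ $state == busy ]]; then
          [[ $cpu_rate -lt 70 ]] && return 1      # a busy reactor should be pegged near 100%
      else
          [[ $cpu_rate -gt 30 ]] && return 1      # an idle reactor should sit near 0%
      fi
      return 0
  }

The mode switches themselves are driven over the UNIX-socket RPC shown in the trace (scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode <reactor> [-d]), with -d disabling interrupt mode and the bare form re-enabling it.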
00:25:59.514 01:08:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:59.514 01:08:33 -- common/autotest_common.sh@862 -- # return 0 00:25:59.514 01:08:33 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:25:59.514 01:08:33 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:59.774 Malloc0 00:25:59.774 Malloc1 00:25:59.774 Malloc2 00:25:59.774 01:08:34 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:25:59.774 01:08:34 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:25:59.774 01:08:34 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:25:59.774 01:08:34 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:00.035 5000+0 records in 00:26:00.035 5000+0 records out 00:26:00.035 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0344797 s, 297 MB/s 00:26:00.035 01:08:34 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:00.035 AIO0 00:26:00.035 01:08:34 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 143190 00:26:00.035 01:08:34 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 143190 00:26:00.035 01:08:34 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=143190 00:26:00.035 01:08:34 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:26:00.035 01:08:34 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:00.035 01:08:34 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:00.035 01:08:34 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:00.035 01:08:34 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:00.035 01:08:34 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:00.035 01:08:34 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:00.035 01:08:34 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:00.035 01:08:34 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:00.295 01:08:34 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:00.295 01:08:34 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:00.295 01:08:34 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:26:00.295 01:08:34 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:00.295 01:08:34 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:00.295 01:08:34 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:00.295 01:08:34 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:00.295 01:08:34 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:00.295 01:08:34 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:00.555 01:08:34 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:00.555 01:08:34 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:00.555 01:08:34 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on 
reactor0.' 00:26:00.555 spdk_thread ids are 1 on reactor0. 00:26:00.555 01:08:34 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:00.555 01:08:34 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143190 0 00:26:00.555 01:08:34 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143190 0 idle 00:26:00.555 01:08:34 -- interrupt/interrupt_common.sh@33 -- # local pid=143190 00:26:00.555 01:08:34 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:00.555 01:08:34 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:00.555 01:08:34 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:00.555 01:08:34 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:00.555 01:08:34 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:00.555 01:08:34 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:00.555 01:08:34 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:00.555 01:08:34 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143190 -w 256 00:26:00.555 01:08:34 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:00.814 01:08:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143190 root 20 0 20.1t 57944 25952 S 6.7 0.5 0:00.38 reactor_0' 00:26:00.814 01:08:35 -- interrupt/interrupt_common.sh@48 -- # echo 143190 root 20 0 20.1t 57944 25952 S 6.7 0.5 0:00.38 reactor_0 00:26:00.814 01:08:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:00.814 01:08:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:00.814 01:08:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:26:00.814 01:08:35 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:26:00.814 01:08:35 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:00.814 01:08:35 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:00.814 01:08:35 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:26:00.815 01:08:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:00.815 01:08:35 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:00.815 01:08:35 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143190 1 00:26:00.815 01:08:35 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143190 1 idle 00:26:00.815 01:08:35 -- interrupt/interrupt_common.sh@33 -- # local pid=143190 00:26:00.815 01:08:35 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:00.815 01:08:35 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:00.815 01:08:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:00.815 01:08:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:00.815 01:08:35 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:00.815 01:08:35 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:00.815 01:08:35 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:00.815 01:08:35 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143190 -w 256 00:26:00.815 01:08:35 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143200 root 20 0 20.1t 57944 25952 S 0.0 0.5 0:00.00 reactor_1' 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@48 -- # echo 143200 root 20 0 20.1t 57944 25952 S 0.0 0.5 0:00.00 reactor_1 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 
00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:01.074 01:08:35 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:01.074 01:08:35 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143190 2 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143190 2 idle 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@33 -- # local pid=143190 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143190 -w 256 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143202 root 20 0 20.1t 57944 25952 S 0.0 0.5 0:00.00 reactor_2' 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@48 -- # echo 143202 root 20 0 20.1t 57944 25952 S 0.0 0.5 0:00.00 reactor_2 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:01.074 01:08:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:01.074 01:08:35 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:26:01.074 01:08:35 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:01.334 [2024-11-18 01:08:35.712122] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:01.334 [2024-11-18 01:08:35.712853] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:26:01.334 [2024-11-18 01:08:35.713366] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:01.334 01:08:35 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:01.593 [2024-11-18 01:08:35.971960] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
00:26:01.593 [2024-11-18 01:08:35.972704] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:01.593 01:08:35 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:01.593 01:08:35 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 143190 0 00:26:01.593 01:08:35 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 143190 0 busy 00:26:01.593 01:08:35 -- interrupt/interrupt_common.sh@33 -- # local pid=143190 00:26:01.593 01:08:35 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:01.593 01:08:35 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:01.593 01:08:35 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:01.593 01:08:35 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:01.593 01:08:35 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:01.593 01:08:35 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:01.593 01:08:35 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:01.593 01:08:35 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143190 -w 256 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143190 root 20 0 20.1t 58064 25952 R 99.9 0.5 0:00.83 reactor_0' 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@48 -- # echo 143190 root 20 0 20.1t 58064 25952 R 99.9 0.5 0:00.83 reactor_0 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:01.852 01:08:36 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:01.852 01:08:36 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 143190 2 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 143190 2 busy 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@33 -- # local pid=143190 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143190 -w 256 00:26:01.852 01:08:36 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:02.110 01:08:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143202 root 20 0 20.1t 58064 25952 R 93.8 0.5 0:00.36 reactor_2' 00:26:02.110 01:08:36 -- interrupt/interrupt_common.sh@48 -- # echo 143202 root 20 0 20.1t 58064 25952 R 93.8 0.5 0:00.36 reactor_2 00:26:02.110 01:08:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:02.110 01:08:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:02.110 01:08:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.8 00:26:02.110 01:08:36 
-- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:26:02.110 01:08:36 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:02.110 01:08:36 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:26:02.110 01:08:36 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:02.110 01:08:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:02.110 01:08:36 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:02.369 [2024-11-18 01:08:36.600109] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:26:02.369 [2024-11-18 01:08:36.600591] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:02.369 01:08:36 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:26:02.369 01:08:36 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 143190 2 00:26:02.369 01:08:36 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143190 2 idle 00:26:02.369 01:08:36 -- interrupt/interrupt_common.sh@33 -- # local pid=143190 00:26:02.369 01:08:36 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:02.369 01:08:36 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:02.369 01:08:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:02.369 01:08:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:02.369 01:08:36 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:02.369 01:08:36 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:02.369 01:08:36 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:02.369 01:08:36 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143190 -w 256 00:26:02.369 01:08:36 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:02.628 01:08:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143202 root 20 0 20.1t 58128 25952 S 0.0 0.5 0:00.61 reactor_2' 00:26:02.628 01:08:36 -- interrupt/interrupt_common.sh@48 -- # echo 143202 root 20 0 20.1t 58128 25952 S 0.0 0.5 0:00.61 reactor_2 00:26:02.628 01:08:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:02.628 01:08:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:02.628 01:08:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:02.628 01:08:36 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:02.628 01:08:36 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:02.628 01:08:36 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:02.628 01:08:36 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:02.628 01:08:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:02.628 01:08:36 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:02.888 [2024-11-18 01:08:37.048136] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:26:02.888 [2024-11-18 01:08:37.048793] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
00:26:02.888 [2024-11-18 01:08:37.049000] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:02.888 01:08:37 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:26:02.888 01:08:37 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 143190 0 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143190 0 idle 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@33 -- # local pid=143190 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143190 -w 256 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143190 root 20 0 20.1t 58184 25952 S 6.7 0.5 0:01.73 reactor_0' 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@48 -- # echo 143190 root 20 0 20.1t 58184 25952 S 6.7 0.5 0:01.73 reactor_0 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:26:02.888 01:08:37 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:02.888 01:08:37 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:02.888 01:08:37 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:26:02.888 01:08:37 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:02.888 01:08:37 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 143190 00:26:02.888 01:08:37 -- common/autotest_common.sh@936 -- # '[' -z 143190 ']' 00:26:02.888 01:08:37 -- common/autotest_common.sh@940 -- # kill -0 143190 00:26:02.888 01:08:37 -- common/autotest_common.sh@941 -- # uname 00:26:02.888 01:08:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:02.888 01:08:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 143190 00:26:02.888 01:08:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:02.888 01:08:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:02.888 01:08:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 143190' 00:26:02.888 killing process with pid 143190 00:26:02.888 01:08:37 -- common/autotest_common.sh@955 -- # kill 143190 00:26:02.888 01:08:37 -- common/autotest_common.sh@960 -- # wait 143190 00:26:03.460 01:08:37 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:26:03.460 01:08:37 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:03.460 ************************************ 
00:26:03.460 END TEST reactor_set_interrupt 00:26:03.460 ************************************ 00:26:03.460 00:26:03.460 real 0m10.548s 00:26:03.460 user 0m9.905s 00:26:03.460 sys 0m2.192s 00:26:03.460 01:08:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:03.460 01:08:37 -- common/autotest_common.sh@10 -- # set +x 00:26:03.460 01:08:37 -- spdk/autotest.sh@187 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:03.460 01:08:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:03.460 01:08:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:03.460 01:08:37 -- common/autotest_common.sh@10 -- # set +x 00:26:03.460 ************************************ 00:26:03.460 START TEST reap_unregistered_poller 00:26:03.460 ************************************ 00:26:03.460 01:08:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:03.721 * Looking for test storage... 00:26:03.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:03.721 01:08:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:03.721 01:08:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:03.721 01:08:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:03.721 01:08:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:03.721 01:08:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:03.721 01:08:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:03.721 01:08:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:03.721 01:08:37 -- scripts/common.sh@335 -- # IFS=.-: 00:26:03.721 01:08:37 -- scripts/common.sh@335 -- # read -ra ver1 00:26:03.721 01:08:37 -- scripts/common.sh@336 -- # IFS=.-: 00:26:03.721 01:08:37 -- scripts/common.sh@336 -- # read -ra ver2 00:26:03.721 01:08:37 -- scripts/common.sh@337 -- # local 'op=<' 00:26:03.721 01:08:37 -- scripts/common.sh@339 -- # ver1_l=2 00:26:03.721 01:08:37 -- scripts/common.sh@340 -- # ver2_l=1 00:26:03.721 01:08:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:03.721 01:08:37 -- scripts/common.sh@343 -- # case "$op" in 00:26:03.721 01:08:37 -- scripts/common.sh@344 -- # : 1 00:26:03.721 01:08:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:03.721 01:08:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:03.721 01:08:37 -- scripts/common.sh@364 -- # decimal 1 00:26:03.721 01:08:37 -- scripts/common.sh@352 -- # local d=1 00:26:03.721 01:08:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:03.721 01:08:37 -- scripts/common.sh@354 -- # echo 1 00:26:03.721 01:08:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:03.721 01:08:38 -- scripts/common.sh@365 -- # decimal 2 00:26:03.721 01:08:38 -- scripts/common.sh@352 -- # local d=2 00:26:03.721 01:08:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:03.721 01:08:38 -- scripts/common.sh@354 -- # echo 2 00:26:03.721 01:08:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:03.721 01:08:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:03.721 01:08:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:03.721 01:08:38 -- scripts/common.sh@367 -- # return 0 00:26:03.721 01:08:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:03.721 01:08:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:03.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.721 --rc genhtml_branch_coverage=1 00:26:03.721 --rc genhtml_function_coverage=1 00:26:03.721 --rc genhtml_legend=1 00:26:03.721 --rc geninfo_all_blocks=1 00:26:03.721 --rc geninfo_unexecuted_blocks=1 00:26:03.721 00:26:03.721 ' 00:26:03.721 01:08:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:03.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.721 --rc genhtml_branch_coverage=1 00:26:03.721 --rc genhtml_function_coverage=1 00:26:03.721 --rc genhtml_legend=1 00:26:03.721 --rc geninfo_all_blocks=1 00:26:03.721 --rc geninfo_unexecuted_blocks=1 00:26:03.721 00:26:03.721 ' 00:26:03.721 01:08:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:03.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.721 --rc genhtml_branch_coverage=1 00:26:03.721 --rc genhtml_function_coverage=1 00:26:03.721 --rc genhtml_legend=1 00:26:03.721 --rc geninfo_all_blocks=1 00:26:03.721 --rc geninfo_unexecuted_blocks=1 00:26:03.721 00:26:03.721 ' 00:26:03.721 01:08:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:03.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.721 --rc genhtml_branch_coverage=1 00:26:03.721 --rc genhtml_function_coverage=1 00:26:03.721 --rc genhtml_legend=1 00:26:03.721 --rc geninfo_all_blocks=1 00:26:03.721 --rc geninfo_unexecuted_blocks=1 00:26:03.721 00:26:03.721 ' 00:26:03.721 01:08:38 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:03.721 01:08:38 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:03.721 01:08:38 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:03.721 01:08:38 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:03.721 01:08:38 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
00:26:03.721 01:08:38 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:03.721 01:08:38 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:03.721 01:08:38 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:03.721 01:08:38 -- common/autotest_common.sh@34 -- # set -e 00:26:03.721 01:08:38 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:03.721 01:08:38 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:03.721 01:08:38 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:03.721 01:08:38 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:03.721 01:08:38 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:26:03.721 01:08:38 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:26:03.721 01:08:38 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:26:03.721 01:08:38 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:03.721 01:08:38 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:26:03.721 01:08:38 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:26:03.721 01:08:38 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:26:03.721 01:08:38 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:26:03.721 01:08:38 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:26:03.721 01:08:38 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:26:03.721 01:08:38 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:26:03.721 01:08:38 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:26:03.721 01:08:38 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:26:03.721 01:08:38 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:26:03.721 01:08:38 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:03.721 01:08:38 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:26:03.721 01:08:38 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:26:03.721 01:08:38 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:03.721 01:08:38 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:03.721 01:08:38 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:26:03.721 01:08:38 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:26:03.721 01:08:38 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:26:03.721 01:08:38 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:03.721 01:08:38 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:26:03.721 01:08:38 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:26:03.721 01:08:38 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:03.721 01:08:38 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:03.721 01:08:38 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:26:03.721 01:08:38 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:26:03.721 01:08:38 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:26:03.721 01:08:38 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:26:03.721 01:08:38 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:26:03.721 01:08:38 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:26:03.721 01:08:38 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:26:03.721 01:08:38 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:26:03.721 01:08:38 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:26:03.721 01:08:38 
-- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:26:03.721 01:08:38 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:26:03.721 01:08:38 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:26:03.721 01:08:38 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:26:03.721 01:08:38 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:26:03.721 01:08:38 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:26:03.721 01:08:38 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:26:03.721 01:08:38 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:03.721 01:08:38 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:26:03.721 01:08:38 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:26:03.721 01:08:38 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:26:03.721 01:08:38 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:03.721 01:08:38 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:26:03.721 01:08:38 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:26:03.721 01:08:38 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:26:03.721 01:08:38 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:26:03.721 01:08:38 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:26:03.721 01:08:38 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:26:03.721 01:08:38 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:26:03.721 01:08:38 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:26:03.721 01:08:38 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:26:03.721 01:08:38 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:26:03.721 01:08:38 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:26:03.721 01:08:38 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:26:03.721 01:08:38 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:26:03.721 01:08:38 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:26:03.721 01:08:38 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:26:03.721 01:08:38 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:26:03.721 01:08:38 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:26:03.721 01:08:38 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:03.722 01:08:38 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:26:03.722 01:08:38 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:26:03.722 01:08:38 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:26:03.722 01:08:38 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:26:03.722 01:08:38 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:26:03.722 01:08:38 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:26:03.722 01:08:38 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:26:03.722 01:08:38 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:26:03.722 01:08:38 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:26:03.722 01:08:38 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:26:03.722 01:08:38 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:03.722 01:08:38 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:26:03.722 01:08:38 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:03.722 01:08:38 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:03.722 01:08:38 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:03.722 01:08:38 -- common/applications.sh@8 
-- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:03.722 01:08:38 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:03.722 01:08:38 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:03.722 01:08:38 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:03.722 01:08:38 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:03.722 01:08:38 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:03.722 01:08:38 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:03.722 01:08:38 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:03.722 01:08:38 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:03.722 01:08:38 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:03.722 01:08:38 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:03.722 01:08:38 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:03.722 01:08:38 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:03.722 01:08:38 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:03.722 #define SPDK_CONFIG_H 00:26:03.722 #define SPDK_CONFIG_APPS 1 00:26:03.722 #define SPDK_CONFIG_ARCH native 00:26:03.722 #define SPDK_CONFIG_ASAN 1 00:26:03.722 #undef SPDK_CONFIG_AVAHI 00:26:03.722 #undef SPDK_CONFIG_CET 00:26:03.722 #define SPDK_CONFIG_COVERAGE 1 00:26:03.722 #define SPDK_CONFIG_CROSS_PREFIX 00:26:03.722 #undef SPDK_CONFIG_CRYPTO 00:26:03.722 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:03.722 #undef SPDK_CONFIG_CUSTOMOCF 00:26:03.722 #undef SPDK_CONFIG_DAOS 00:26:03.722 #define SPDK_CONFIG_DAOS_DIR 00:26:03.722 #define SPDK_CONFIG_DEBUG 1 00:26:03.722 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:03.722 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:26:03.722 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:26:03.722 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:26:03.722 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:03.722 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:03.722 #define SPDK_CONFIG_EXAMPLES 1 00:26:03.722 #undef SPDK_CONFIG_FC 00:26:03.722 #define SPDK_CONFIG_FC_PATH 00:26:03.722 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:03.722 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:03.722 #undef SPDK_CONFIG_FUSE 00:26:03.722 #undef SPDK_CONFIG_FUZZER 00:26:03.722 #define SPDK_CONFIG_FUZZER_LIB 00:26:03.722 #undef SPDK_CONFIG_GOLANG 00:26:03.722 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:03.722 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:03.722 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:03.722 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:03.722 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:03.722 #define SPDK_CONFIG_IDXD 1 00:26:03.722 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:03.722 #undef SPDK_CONFIG_IPSEC_MB 00:26:03.722 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:03.722 #define SPDK_CONFIG_ISAL 1 00:26:03.722 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:03.722 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:03.722 #define SPDK_CONFIG_LIBDIR 00:26:03.722 #undef SPDK_CONFIG_LTO 00:26:03.722 #define SPDK_CONFIG_MAX_LCORES 00:26:03.722 #define SPDK_CONFIG_NVME_CUSE 1 00:26:03.722 #undef SPDK_CONFIG_OCF 00:26:03.722 #define SPDK_CONFIG_OCF_PATH 00:26:03.722 #define 
SPDK_CONFIG_OPENSSL_PATH 00:26:03.722 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:03.722 #undef SPDK_CONFIG_PGO_USE 00:26:03.722 #define SPDK_CONFIG_PREFIX /usr/local 00:26:03.722 #define SPDK_CONFIG_RAID5F 1 00:26:03.722 #undef SPDK_CONFIG_RBD 00:26:03.722 #define SPDK_CONFIG_RDMA 1 00:26:03.722 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:03.722 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:03.722 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:03.722 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:03.722 #undef SPDK_CONFIG_SHARED 00:26:03.722 #undef SPDK_CONFIG_SMA 00:26:03.722 #define SPDK_CONFIG_TESTS 1 00:26:03.722 #undef SPDK_CONFIG_TSAN 00:26:03.722 #undef SPDK_CONFIG_UBLK 00:26:03.722 #define SPDK_CONFIG_UBSAN 1 00:26:03.722 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:03.722 #undef SPDK_CONFIG_URING 00:26:03.722 #define SPDK_CONFIG_URING_PATH 00:26:03.722 #undef SPDK_CONFIG_URING_ZNS 00:26:03.722 #undef SPDK_CONFIG_USDT 00:26:03.722 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:03.722 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:03.722 #undef SPDK_CONFIG_VFIO_USER 00:26:03.722 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:03.722 #define SPDK_CONFIG_VHOST 1 00:26:03.722 #define SPDK_CONFIG_VIRTIO 1 00:26:03.722 #undef SPDK_CONFIG_VTUNE 00:26:03.722 #define SPDK_CONFIG_VTUNE_DIR 00:26:03.722 #define SPDK_CONFIG_WERROR 1 00:26:03.722 #define SPDK_CONFIG_WPDK_DIR 00:26:03.722 #undef SPDK_CONFIG_XNVME 00:26:03.722 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:03.722 01:08:38 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:03.722 01:08:38 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:03.722 01:08:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.722 01:08:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.722 01:08:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.722 01:08:38 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:03.722 01:08:38 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:03.722 01:08:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:03.722 01:08:38 -- paths/export.sh@5 -- # export PATH 00:26:03.722 01:08:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:03.722 01:08:38 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:03.722 01:08:38 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:03.722 01:08:38 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:03.722 01:08:38 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:03.722 01:08:38 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:03.722 01:08:38 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:03.722 01:08:38 -- pm/common@16 -- # TEST_TAG=N/A 00:26:03.722 01:08:38 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:03.722 01:08:38 -- common/autotest_common.sh@52 -- # : 1 00:26:03.722 01:08:38 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:03.722 01:08:38 -- common/autotest_common.sh@56 -- # : 0 00:26:03.722 01:08:38 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:03.722 01:08:38 -- common/autotest_common.sh@58 -- # : 0 00:26:03.722 01:08:38 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:03.722 01:08:38 -- common/autotest_common.sh@60 -- # : 1 00:26:03.722 01:08:38 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:03.722 01:08:38 -- common/autotest_common.sh@62 -- # : 1 00:26:03.722 01:08:38 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:03.722 01:08:38 -- common/autotest_common.sh@64 -- # : 00:26:03.722 01:08:38 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:03.722 01:08:38 -- common/autotest_common.sh@66 -- # : 0 00:26:03.722 01:08:38 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:03.722 01:08:38 -- common/autotest_common.sh@68 -- # : 0 00:26:03.722 01:08:38 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:03.722 01:08:38 -- common/autotest_common.sh@70 -- # : 0 00:26:03.722 01:08:38 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:03.722 01:08:38 -- common/autotest_common.sh@72 -- # : 0 00:26:03.722 01:08:38 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:03.722 01:08:38 -- common/autotest_common.sh@74 -- # : 1 00:26:03.722 01:08:38 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:03.722 01:08:38 -- common/autotest_common.sh@76 -- # : 0 00:26:03.722 01:08:38 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:03.722 01:08:38 -- common/autotest_common.sh@78 -- # : 0 00:26:03.722 01:08:38 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:03.722 01:08:38 -- common/autotest_common.sh@80 -- # : 0 00:26:03.722 01:08:38 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:03.722 01:08:38 -- common/autotest_common.sh@82 -- # : 0 00:26:03.723 01:08:38 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:03.723 01:08:38 -- common/autotest_common.sh@84 -- # : 0 00:26:03.723 01:08:38 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:03.723 01:08:38 -- 
common/autotest_common.sh@86 -- # : 0 00:26:03.983 01:08:38 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:03.983 01:08:38 -- common/autotest_common.sh@88 -- # : 0 00:26:03.983 01:08:38 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:03.983 01:08:38 -- common/autotest_common.sh@90 -- # : 0 00:26:03.983 01:08:38 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:03.983 01:08:38 -- common/autotest_common.sh@92 -- # : 0 00:26:03.983 01:08:38 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:03.983 01:08:38 -- common/autotest_common.sh@94 -- # : 0 00:26:03.983 01:08:38 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:03.983 01:08:38 -- common/autotest_common.sh@96 -- # : rdma 00:26:03.983 01:08:38 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:03.983 01:08:38 -- common/autotest_common.sh@98 -- # : 0 00:26:03.983 01:08:38 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:03.983 01:08:38 -- common/autotest_common.sh@100 -- # : 0 00:26:03.983 01:08:38 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:03.983 01:08:38 -- common/autotest_common.sh@102 -- # : 1 00:26:03.983 01:08:38 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:03.983 01:08:38 -- common/autotest_common.sh@104 -- # : 0 00:26:03.983 01:08:38 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:03.983 01:08:38 -- common/autotest_common.sh@106 -- # : 0 00:26:03.983 01:08:38 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:03.983 01:08:38 -- common/autotest_common.sh@108 -- # : 0 00:26:03.983 01:08:38 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:03.983 01:08:38 -- common/autotest_common.sh@110 -- # : 0 00:26:03.983 01:08:38 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:03.983 01:08:38 -- common/autotest_common.sh@112 -- # : 0 00:26:03.983 01:08:38 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:03.983 01:08:38 -- common/autotest_common.sh@114 -- # : 1 00:26:03.983 01:08:38 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:03.984 01:08:38 -- common/autotest_common.sh@116 -- # : 1 00:26:03.984 01:08:38 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:03.984 01:08:38 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:26:03.984 01:08:38 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:03.984 01:08:38 -- common/autotest_common.sh@120 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:03.984 01:08:38 -- common/autotest_common.sh@122 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:03.984 01:08:38 -- common/autotest_common.sh@124 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:03.984 01:08:38 -- common/autotest_common.sh@126 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:03.984 01:08:38 -- common/autotest_common.sh@128 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:03.984 01:08:38 -- common/autotest_common.sh@130 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:03.984 01:08:38 -- common/autotest_common.sh@132 -- # : v22.11.4 00:26:03.984 01:08:38 -- common/autotest_common.sh@133 -- # 
export SPDK_TEST_NATIVE_DPDK 00:26:03.984 01:08:38 -- common/autotest_common.sh@134 -- # : true 00:26:03.984 01:08:38 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:03.984 01:08:38 -- common/autotest_common.sh@136 -- # : 1 00:26:03.984 01:08:38 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:03.984 01:08:38 -- common/autotest_common.sh@138 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:03.984 01:08:38 -- common/autotest_common.sh@140 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:03.984 01:08:38 -- common/autotest_common.sh@142 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:03.984 01:08:38 -- common/autotest_common.sh@144 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:03.984 01:08:38 -- common/autotest_common.sh@146 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:03.984 01:08:38 -- common/autotest_common.sh@148 -- # : 00:26:03.984 01:08:38 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:03.984 01:08:38 -- common/autotest_common.sh@150 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:03.984 01:08:38 -- common/autotest_common.sh@152 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:03.984 01:08:38 -- common/autotest_common.sh@154 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:03.984 01:08:38 -- common/autotest_common.sh@156 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:03.984 01:08:38 -- common/autotest_common.sh@158 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:03.984 01:08:38 -- common/autotest_common.sh@160 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:26:03.984 01:08:38 -- common/autotest_common.sh@163 -- # : 00:26:03.984 01:08:38 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:26:03.984 01:08:38 -- common/autotest_common.sh@165 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:26:03.984 01:08:38 -- common/autotest_common.sh@167 -- # : 0 00:26:03.984 01:08:38 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:03.984 01:08:38 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:03.984 01:08:38 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:03.984 01:08:38 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:26:03.984 01:08:38 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:26:03.984 01:08:38 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:03.984 01:08:38 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:03.984 01:08:38 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:03.984 01:08:38 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:03.984 01:08:38 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:03.984 01:08:38 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:03.984 01:08:38 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:03.984 01:08:38 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:03.984 01:08:38 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:03.984 01:08:38 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:26:03.984 01:08:38 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:03.984 01:08:38 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:03.984 01:08:38 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:03.984 01:08:38 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:03.984 01:08:38 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:03.984 01:08:38 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:26:03.984 01:08:38 -- common/autotest_common.sh@196 -- # cat 00:26:03.984 01:08:38 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:26:03.984 01:08:38 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:03.984 01:08:38 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:03.985 01:08:38 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:03.985 01:08:38 -- common/autotest_common.sh@226 -- # 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:03.985 01:08:38 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:26:03.985 01:08:38 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:26:03.985 01:08:38 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:03.985 01:08:38 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:03.985 01:08:38 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:03.985 01:08:38 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:03.985 01:08:38 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:26:03.985 01:08:38 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:26:03.985 01:08:38 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:03.985 01:08:38 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:03.985 01:08:38 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:03.985 01:08:38 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:03.985 01:08:38 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:03.985 01:08:38 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:03.985 01:08:38 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:26:03.985 01:08:38 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:26:03.985 01:08:38 -- common/autotest_common.sh@249 -- # _LCOV= 00:26:03.985 01:08:38 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:26:03.985 01:08:38 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:26:03.985 01:08:38 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:26:03.985 01:08:38 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:26:03.985 01:08:38 -- common/autotest_common.sh@255 -- # lcov_opt= 00:26:03.985 01:08:38 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:26:03.985 01:08:38 -- common/autotest_common.sh@259 -- # export valgrind= 00:26:03.985 01:08:38 -- common/autotest_common.sh@259 -- # valgrind= 00:26:03.985 01:08:38 -- common/autotest_common.sh@265 -- # uname -s 00:26:03.985 01:08:38 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:26:03.985 01:08:38 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:26:03.985 01:08:38 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:26:03.985 01:08:38 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:26:03.985 01:08:38 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:26:03.985 01:08:38 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:26:03.985 01:08:38 -- common/autotest_common.sh@275 -- # MAKE=make 00:26:03.985 01:08:38 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:26:03.985 01:08:38 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:26:03.985 01:08:38 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:26:03.985 01:08:38 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:03.985 01:08:38 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:26:03.985 01:08:38 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:26:03.985 01:08:38 -- 
common/autotest_common.sh@319 -- # [[ -z 143362 ]] 00:26:03.985 01:08:38 -- common/autotest_common.sh@319 -- # kill -0 143362 00:26:03.985 01:08:38 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:26:03.985 01:08:38 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:26:03.985 01:08:38 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:26:03.985 01:08:38 -- common/autotest_common.sh@332 -- # local mount target_dir 00:26:03.985 01:08:38 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:26:03.985 01:08:38 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:26:03.985 01:08:38 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:26:03.985 01:08:38 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:26:03.985 01:08:38 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.Z4bI6c 00:26:03.985 01:08:38 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:03.985 01:08:38 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:26:03.985 01:08:38 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:26:03.985 01:08:38 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.Z4bI6c/tests/interrupt /tmp/spdk.Z4bI6c 00:26:03.985 01:08:38 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:26:03.985 01:08:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:03.985 01:08:38 -- common/autotest_common.sh@328 -- # df -T 00:26:03.985 01:08:38 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:26:03.985 01:08:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:26:03.985 01:08:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:26:03.985 01:08:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=1248956416 00:26:03.985 01:08:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253683200 00:26:03.985 01:08:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=4726784 00:26:03.985 01:08:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:03.985 01:08:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1 00:26:03.985 01:08:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:26:03.985 01:08:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=9433800704 00:26:03.985 01:08:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20616794112 00:26:03.985 01:08:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=11166216192 00:26:03.985 01:08:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:03.985 01:08:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:26:03.985 01:08:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:26:03.985 01:08:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=6267146240 00:26:03.985 01:08:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6268403712 00:26:03.985 01:08:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:26:03.985 01:08:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:03.985 01:08:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:26:03.985 01:08:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:26:03.985 01:08:38 -- common/autotest_common.sh@363 -- # 
avails["$mount"]=5242880 00:26:03.985 01:08:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880 00:26:03.985 01:08:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:26:03.985 01:08:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:03.985 01:08:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15 00:26:03.985 01:08:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:26:03.986 01:08:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=103061504 00:26:03.986 01:08:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968 00:26:03.986 01:08:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=6334464 00:26:03.986 01:08:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:03.986 01:08:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:26:03.986 01:08:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:26:03.986 01:08:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253675008 00:26:03.986 01:08:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253679104 00:26:03.986 01:08:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=4096 00:26:03.986 01:08:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:03.986 01:08:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:26:03.986 01:08:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:26:03.986 01:08:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=97108201472 00:26:03.986 01:08:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:26:03.986 01:08:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=2594578432 00:26:03.986 01:08:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:03.986 01:08:38 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:26:03.986 * Looking for test storage... 
00:26:03.986 01:08:38 -- common/autotest_common.sh@369 -- # local target_space new_size 00:26:03.986 01:08:38 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:26:03.986 01:08:38 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:03.986 01:08:38 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:03.986 01:08:38 -- common/autotest_common.sh@373 -- # mount=/ 00:26:03.986 01:08:38 -- common/autotest_common.sh@375 -- # target_space=9433800704 00:26:03.986 01:08:38 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:26:03.986 01:08:38 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:26:03.986 01:08:38 -- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]] 00:26:03.986 01:08:38 -- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]] 00:26:03.986 01:08:38 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:26:03.986 01:08:38 -- common/autotest_common.sh@382 -- # new_size=13380808704 00:26:03.986 01:08:38 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:03.986 01:08:38 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:03.986 01:08:38 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:03.986 01:08:38 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:03.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:03.986 01:08:38 -- common/autotest_common.sh@390 -- # return 0 00:26:03.986 01:08:38 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:26:03.986 01:08:38 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:26:03.986 01:08:38 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:03.986 01:08:38 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:03.986 01:08:38 -- common/autotest_common.sh@1682 -- # true 00:26:03.986 01:08:38 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:26:03.986 01:08:38 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:03.986 01:08:38 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:03.986 01:08:38 -- common/autotest_common.sh@27 -- # exec 00:26:03.986 01:08:38 -- common/autotest_common.sh@29 -- # exec 00:26:03.986 01:08:38 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:03.986 01:08:38 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:26:03.986 01:08:38 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:03.986 01:08:38 -- common/autotest_common.sh@18 -- # set -x 00:26:03.986 01:08:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:03.986 01:08:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:03.986 01:08:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:03.986 01:08:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:03.986 01:08:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:03.986 01:08:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:03.986 01:08:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:03.986 01:08:38 -- scripts/common.sh@335 -- # IFS=.-: 00:26:03.986 01:08:38 -- scripts/common.sh@335 -- # read -ra ver1 00:26:03.986 01:08:38 -- scripts/common.sh@336 -- # IFS=.-: 00:26:03.986 01:08:38 -- scripts/common.sh@336 -- # read -ra ver2 00:26:03.986 01:08:38 -- scripts/common.sh@337 -- # local 'op=<' 00:26:03.986 01:08:38 -- scripts/common.sh@339 -- # ver1_l=2 00:26:03.986 01:08:38 -- scripts/common.sh@340 -- # ver2_l=1 00:26:03.986 01:08:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:03.986 01:08:38 -- scripts/common.sh@343 -- # case "$op" in 00:26:03.986 01:08:38 -- scripts/common.sh@344 -- # : 1 00:26:03.986 01:08:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:03.986 01:08:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:03.986 01:08:38 -- scripts/common.sh@364 -- # decimal 1 00:26:03.986 01:08:38 -- scripts/common.sh@352 -- # local d=1 00:26:03.986 01:08:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:03.986 01:08:38 -- scripts/common.sh@354 -- # echo 1 00:26:03.986 01:08:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:03.986 01:08:38 -- scripts/common.sh@365 -- # decimal 2 00:26:03.986 01:08:38 -- scripts/common.sh@352 -- # local d=2 00:26:03.986 01:08:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:03.986 01:08:38 -- scripts/common.sh@354 -- # echo 2 00:26:03.986 01:08:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:03.986 01:08:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:03.986 01:08:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:03.986 01:08:38 -- scripts/common.sh@367 -- # return 0 00:26:03.986 01:08:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:03.986 01:08:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:03.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.986 --rc genhtml_branch_coverage=1 00:26:03.986 --rc genhtml_function_coverage=1 00:26:03.986 --rc genhtml_legend=1 00:26:03.986 --rc geninfo_all_blocks=1 00:26:03.986 --rc geninfo_unexecuted_blocks=1 00:26:03.986 00:26:03.987 ' 00:26:03.987 01:08:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:03.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.987 --rc genhtml_branch_coverage=1 00:26:03.987 --rc genhtml_function_coverage=1 00:26:03.987 --rc genhtml_legend=1 00:26:03.987 --rc geninfo_all_blocks=1 00:26:03.987 --rc geninfo_unexecuted_blocks=1 00:26:03.987 00:26:03.987 ' 00:26:03.987 01:08:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:03.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.987 --rc genhtml_branch_coverage=1 00:26:03.987 --rc genhtml_function_coverage=1 00:26:03.987 --rc genhtml_legend=1 00:26:03.987 --rc geninfo_all_blocks=1 00:26:03.987 --rc 
geninfo_unexecuted_blocks=1 00:26:03.987 00:26:03.987 ' 00:26:03.987 01:08:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:03.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.987 --rc genhtml_branch_coverage=1 00:26:03.987 --rc genhtml_function_coverage=1 00:26:03.987 --rc genhtml_legend=1 00:26:03.987 --rc geninfo_all_blocks=1 00:26:03.987 --rc geninfo_unexecuted_blocks=1 00:26:03.987 00:26:03.987 ' 00:26:03.987 01:08:38 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:03.987 01:08:38 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:03.987 01:08:38 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:03.987 01:08:38 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:03.987 01:08:38 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:03.987 01:08:38 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:03.987 01:08:38 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:03.987 01:08:38 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:03.987 01:08:38 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:26:03.987 01:08:38 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.987 01:08:38 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:03.987 01:08:38 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=143428 00:26:03.987 01:08:38 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:03.987 01:08:38 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:03.987 01:08:38 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 143428 /var/tmp/spdk.sock 00:26:03.987 01:08:38 -- common/autotest_common.sh@829 -- # '[' -z 143428 ']' 00:26:03.987 01:08:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.987 01:08:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:03.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.987 01:08:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.987 01:08:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:03.987 01:08:38 -- common/autotest_common.sh@10 -- # set +x 00:26:03.987 [2024-11-18 01:08:38.369168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
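At this point the test launches the interrupt-mode example target with a three-core mask and waits for its JSON-RPC socket before querying it (the startup messages continue below). A condensed sketch of that startup-and-query sequence, reusing the paths and flags shown in the log; the polling loop stands in for the real waitforlisten helper:

  #!/usr/bin/env bash
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk.sock

  # Start the interrupt target on cores 0-2; flags copied from the log.
  "$SPDK/build/examples/interrupt_tgt" -m 0x07 -r "$SOCK" -E -g &
  tgt_pid=$!

  # Wait until the target answers RPCs on the UNIX domain socket (~10 s max).
  for _ in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null && break
      kill -0 "$tgt_pid" 2>/dev/null || { echo "target died" >&2; exit 1; }
      sleep 0.1
  done

  # Same query the test performs below: dump the app thread and its pollers.
  "$SPDK/scripts/rpc.py" -s "$SOCK" thread_get_pollers | jq -r '.threads[0]'
  "$SPDK/scripts/rpc.py" -s "$SOCK" thread_get_pollers \
      | jq -r '.threads[0].timed_pollers[].name'    # e.g. rpc_subsystem_poll

  kill "$tgt_pid"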
00:26:03.987 [2024-11-18 01:08:38.369431] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143428 ] 00:26:04.254 [2024-11-18 01:08:38.532179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:04.254 [2024-11-18 01:08:38.605982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.254 [2024-11-18 01:08:38.606122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.254 [2024-11-18 01:08:38.606464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:04.527 [2024-11-18 01:08:38.721920] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:05.098 01:08:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:05.098 01:08:39 -- common/autotest_common.sh@862 -- # return 0 00:26:05.098 01:08:39 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:26:05.098 01:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.098 01:08:39 -- common/autotest_common.sh@10 -- # set +x 00:26:05.098 01:08:39 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:26:05.098 01:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.098 01:08:39 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:26:05.098 "name": "app_thread", 00:26:05.098 "id": 1, 00:26:05.098 "active_pollers": [], 00:26:05.098 "timed_pollers": [ 00:26:05.098 { 00:26:05.098 "name": "rpc_subsystem_poll", 00:26:05.098 "id": 1, 00:26:05.098 "state": "waiting", 00:26:05.098 "run_count": 0, 00:26:05.098 "busy_count": 0, 00:26:05.098 "period_ticks": 8400000 00:26:05.098 } 00:26:05.098 ], 00:26:05.098 "paused_pollers": [] 00:26:05.098 }' 00:26:05.099 01:08:39 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:26:05.099 01:08:39 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:26:05.099 01:08:39 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:26:05.099 01:08:39 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:26:05.099 01:08:39 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:26:05.099 01:08:39 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:26:05.099 01:08:39 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:05.099 01:08:39 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:05.099 01:08:39 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:05.099 5000+0 records in 00:26:05.099 5000+0 records out 00:26:05.099 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0320316 s, 320 MB/s 00:26:05.099 01:08:39 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:05.356 AIO0 00:26:05.356 01:08:39 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:05.615 01:08:39 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:26:05.874 01:08:40 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:26:05.874 01:08:40 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:26:05.874 01:08:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.874 01:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:05.874 01:08:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.874 01:08:40 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:26:05.874 "name": "app_thread", 00:26:05.874 "id": 1, 00:26:05.874 "active_pollers": [], 00:26:05.874 "timed_pollers": [ 00:26:05.874 { 00:26:05.874 "name": "rpc_subsystem_poll", 00:26:05.874 "id": 1, 00:26:05.874 "state": "waiting", 00:26:05.874 "run_count": 0, 00:26:05.874 "busy_count": 0, 00:26:05.874 "period_ticks": 8400000 00:26:05.874 } 00:26:05.874 ], 00:26:05.874 "paused_pollers": [] 00:26:05.874 }' 00:26:05.874 01:08:40 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:26:05.874 01:08:40 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:26:05.874 01:08:40 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:26:05.874 01:08:40 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:26:05.874 01:08:40 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:26:05.874 01:08:40 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:26:05.874 01:08:40 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:26:05.874 01:08:40 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 143428 00:26:05.874 01:08:40 -- common/autotest_common.sh@936 -- # '[' -z 143428 ']' 00:26:05.874 01:08:40 -- common/autotest_common.sh@940 -- # kill -0 143428 00:26:05.874 01:08:40 -- common/autotest_common.sh@941 -- # uname 00:26:05.874 01:08:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:05.874 01:08:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 143428 00:26:05.874 01:08:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:05.874 01:08:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:05.874 killing process with pid 143428 00:26:05.874 01:08:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 143428' 00:26:05.874 01:08:40 -- common/autotest_common.sh@955 -- # kill 143428 00:26:05.874 01:08:40 -- common/autotest_common.sh@960 -- # wait 143428 00:26:06.445 01:08:40 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:26:06.445 01:08:40 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:06.445 00:26:06.445 real 0m2.841s 00:26:06.445 user 0m1.788s 00:26:06.445 sys 0m0.732s 00:26:06.445 01:08:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:06.445 01:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:06.445 ************************************ 00:26:06.445 END TEST reap_unregistered_poller 00:26:06.446 ************************************ 00:26:06.446 01:08:40 -- spdk/autotest.sh@191 -- # uname -s 00:26:06.446 01:08:40 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:26:06.446 01:08:40 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:26:06.446 01:08:40 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:26:06.446 01:08:40 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:26:06.446 01:08:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:06.446 01:08:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:06.446 01:08:40 -- 
common/autotest_common.sh@10 -- # set +x 00:26:06.446 ************************************ 00:26:06.446 START TEST spdk_dd 00:26:06.446 ************************************ 00:26:06.446 01:08:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:26:06.446 * Looking for test storage... 00:26:06.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:06.446 01:08:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:06.446 01:08:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:06.446 01:08:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:06.771 01:08:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:06.771 01:08:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:06.771 01:08:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:06.771 01:08:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:06.771 01:08:40 -- scripts/common.sh@335 -- # IFS=.-: 00:26:06.771 01:08:40 -- scripts/common.sh@335 -- # read -ra ver1 00:26:06.771 01:08:40 -- scripts/common.sh@336 -- # IFS=.-: 00:26:06.771 01:08:40 -- scripts/common.sh@336 -- # read -ra ver2 00:26:06.771 01:08:40 -- scripts/common.sh@337 -- # local 'op=<' 00:26:06.772 01:08:40 -- scripts/common.sh@339 -- # ver1_l=2 00:26:06.772 01:08:40 -- scripts/common.sh@340 -- # ver2_l=1 00:26:06.772 01:08:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:06.772 01:08:40 -- scripts/common.sh@343 -- # case "$op" in 00:26:06.772 01:08:40 -- scripts/common.sh@344 -- # : 1 00:26:06.772 01:08:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:06.772 01:08:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:06.772 01:08:40 -- scripts/common.sh@364 -- # decimal 1 00:26:06.772 01:08:40 -- scripts/common.sh@352 -- # local d=1 00:26:06.772 01:08:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:06.772 01:08:40 -- scripts/common.sh@354 -- # echo 1 00:26:06.772 01:08:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:06.772 01:08:40 -- scripts/common.sh@365 -- # decimal 2 00:26:06.772 01:08:40 -- scripts/common.sh@352 -- # local d=2 00:26:06.772 01:08:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:06.772 01:08:40 -- scripts/common.sh@354 -- # echo 2 00:26:06.772 01:08:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:06.772 01:08:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:06.772 01:08:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:06.772 01:08:40 -- scripts/common.sh@367 -- # return 0 00:26:06.772 01:08:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:06.772 01:08:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:06.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.772 --rc genhtml_branch_coverage=1 00:26:06.772 --rc genhtml_function_coverage=1 00:26:06.772 --rc genhtml_legend=1 00:26:06.772 --rc geninfo_all_blocks=1 00:26:06.772 --rc geninfo_unexecuted_blocks=1 00:26:06.772 00:26:06.772 ' 00:26:06.772 01:08:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:06.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.772 --rc genhtml_branch_coverage=1 00:26:06.772 --rc genhtml_function_coverage=1 00:26:06.772 --rc genhtml_legend=1 00:26:06.772 --rc geninfo_all_blocks=1 00:26:06.772 --rc geninfo_unexecuted_blocks=1 00:26:06.772 00:26:06.772 ' 00:26:06.772 01:08:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:06.772 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.772 --rc genhtml_branch_coverage=1 00:26:06.772 --rc genhtml_function_coverage=1 00:26:06.772 --rc genhtml_legend=1 00:26:06.772 --rc geninfo_all_blocks=1 00:26:06.772 --rc geninfo_unexecuted_blocks=1 00:26:06.772 00:26:06.772 ' 00:26:06.772 01:08:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:06.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.772 --rc genhtml_branch_coverage=1 00:26:06.772 --rc genhtml_function_coverage=1 00:26:06.772 --rc genhtml_legend=1 00:26:06.772 --rc geninfo_all_blocks=1 00:26:06.772 --rc geninfo_unexecuted_blocks=1 00:26:06.772 00:26:06.772 ' 00:26:06.772 01:08:40 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:06.772 01:08:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.772 01:08:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.772 01:08:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.772 01:08:40 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:06.772 01:08:40 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:06.772 01:08:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:06.772 01:08:40 -- paths/export.sh@5 -- # export PATH 00:26:06.772 01:08:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:06.772 01:08:40 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:07.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:26:07.031 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:08.939 01:08:43 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:26:08.939 01:08:43 -- dd/dd.sh@11 -- # nvme_in_userspace 00:26:08.939 01:08:43 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:08.939 01:08:43 -- scripts/common.sh@312 -- # local nvmes 00:26:08.939 01:08:43 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:08.939 01:08:43 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:08.939 01:08:43 -- 
scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:08.939 01:08:43 -- scripts/common.sh@297 -- # local bdf= 00:26:08.939 01:08:43 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:08.939 01:08:43 -- scripts/common.sh@232 -- # local class 00:26:08.939 01:08:43 -- scripts/common.sh@233 -- # local subclass 00:26:08.939 01:08:43 -- scripts/common.sh@234 -- # local progif 00:26:08.939 01:08:43 -- scripts/common.sh@235 -- # printf %02x 1 00:26:08.939 01:08:43 -- scripts/common.sh@235 -- # class=01 00:26:08.939 01:08:43 -- scripts/common.sh@236 -- # printf %02x 8 00:26:08.939 01:08:43 -- scripts/common.sh@236 -- # subclass=08 00:26:08.939 01:08:43 -- scripts/common.sh@237 -- # printf %02x 2 00:26:08.939 01:08:43 -- scripts/common.sh@237 -- # progif=02 00:26:08.939 01:08:43 -- scripts/common.sh@239 -- # hash lspci 00:26:08.939 01:08:43 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:08.939 01:08:43 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:08.939 01:08:43 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:08.939 01:08:43 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:08.939 01:08:43 -- scripts/common.sh@244 -- # tr -d '"' 00:26:08.939 01:08:43 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:08.939 01:08:43 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:08.939 01:08:43 -- scripts/common.sh@15 -- # local i 00:26:08.939 01:08:43 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:08.939 01:08:43 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:08.939 01:08:43 -- scripts/common.sh@24 -- # return 0 00:26:08.939 01:08:43 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:08.939 01:08:43 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:08.939 01:08:43 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:08.939 01:08:43 -- scripts/common.sh@322 -- # uname -s 00:26:08.939 01:08:43 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:08.939 01:08:43 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:08.939 01:08:43 -- scripts/common.sh@327 -- # (( 1 )) 00:26:08.939 01:08:43 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:26:08.939 01:08:43 -- dd/dd.sh@13 -- # check_liburing 00:26:08.939 01:08:43 -- dd/common.sh@139 -- # local lib so 00:26:08.939 01:08:43 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:26:08.939 01:08:43 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r 
lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:26:08.939 01:08:43 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:08.939 01:08:43 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:26:08.940 01:08:43 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:26:08.940 01:08:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:08.940 01:08:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:08.940 01:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:08.940 ************************************ 00:26:08.940 START TEST spdk_dd_basic_rw 00:26:08.940 ************************************ 00:26:08.940 01:08:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:26:08.940 * Looking for test storage... 
00:26:08.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:08.940 01:08:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:08.940 01:08:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:08.940 01:08:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:08.940 01:08:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:08.940 01:08:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:08.940 01:08:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:08.940 01:08:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:08.940 01:08:43 -- scripts/common.sh@335 -- # IFS=.-: 00:26:08.940 01:08:43 -- scripts/common.sh@335 -- # read -ra ver1 00:26:08.940 01:08:43 -- scripts/common.sh@336 -- # IFS=.-: 00:26:08.940 01:08:43 -- scripts/common.sh@336 -- # read -ra ver2 00:26:08.940 01:08:43 -- scripts/common.sh@337 -- # local 'op=<' 00:26:08.940 01:08:43 -- scripts/common.sh@339 -- # ver1_l=2 00:26:08.940 01:08:43 -- scripts/common.sh@340 -- # ver2_l=1 00:26:08.940 01:08:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:08.940 01:08:43 -- scripts/common.sh@343 -- # case "$op" in 00:26:08.940 01:08:43 -- scripts/common.sh@344 -- # : 1 00:26:08.940 01:08:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:08.940 01:08:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:08.940 01:08:43 -- scripts/common.sh@364 -- # decimal 1 00:26:08.940 01:08:43 -- scripts/common.sh@352 -- # local d=1 00:26:08.940 01:08:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:08.940 01:08:43 -- scripts/common.sh@354 -- # echo 1 00:26:08.940 01:08:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:08.940 01:08:43 -- scripts/common.sh@365 -- # decimal 2 00:26:08.940 01:08:43 -- scripts/common.sh@352 -- # local d=2 00:26:08.940 01:08:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:08.940 01:08:43 -- scripts/common.sh@354 -- # echo 2 00:26:08.940 01:08:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:08.940 01:08:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:08.940 01:08:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:08.940 01:08:43 -- scripts/common.sh@367 -- # return 0 00:26:08.940 01:08:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:08.940 01:08:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:08.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.940 --rc genhtml_branch_coverage=1 00:26:08.940 --rc genhtml_function_coverage=1 00:26:08.940 --rc genhtml_legend=1 00:26:08.940 --rc geninfo_all_blocks=1 00:26:08.940 --rc geninfo_unexecuted_blocks=1 00:26:08.940 00:26:08.940 ' 00:26:08.940 01:08:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:08.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.940 --rc genhtml_branch_coverage=1 00:26:08.940 --rc genhtml_function_coverage=1 00:26:08.940 --rc genhtml_legend=1 00:26:08.940 --rc geninfo_all_blocks=1 00:26:08.940 --rc geninfo_unexecuted_blocks=1 00:26:08.940 00:26:08.940 ' 00:26:08.940 01:08:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:08.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.940 --rc genhtml_branch_coverage=1 00:26:08.940 --rc genhtml_function_coverage=1 00:26:08.940 --rc genhtml_legend=1 00:26:08.940 --rc geninfo_all_blocks=1 00:26:08.940 --rc geninfo_unexecuted_blocks=1 00:26:08.940 00:26:08.940 ' 00:26:08.940 01:08:43 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:08.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.940 --rc genhtml_branch_coverage=1 00:26:08.940 --rc genhtml_function_coverage=1 00:26:08.940 --rc genhtml_legend=1 00:26:08.940 --rc geninfo_all_blocks=1 00:26:08.940 --rc geninfo_unexecuted_blocks=1 00:26:08.940 00:26:08.940 ' 00:26:08.940 01:08:43 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:08.940 01:08:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.940 01:08:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.940 01:08:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.940 01:08:43 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:08.940 01:08:43 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:08.940 01:08:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:08.940 01:08:43 -- paths/export.sh@5 -- # export PATH 00:26:08.940 01:08:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:08.940 01:08:43 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:26:08.940 01:08:43 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:26:08.940 01:08:43 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:26:08.940 01:08:43 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:26:08.940 01:08:43 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:26:08.940 01:08:43 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:26:08.940 01:08:43 -- dd/basic_rw.sh@85 -- # declare -A 
method_bdev_nvme_attach_controller_0 00:26:08.940 01:08:43 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:08.940 01:08:43 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:08.940 01:08:43 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:26:08.940 01:08:43 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:26:08.940 01:08:43 -- dd/common.sh@126 -- # mapfile -t id 00:26:08.940 01:08:43 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:26:09.511 01:08:43 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware 
Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% 
Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 103 Data Units Written: 7 Host Read Commands: 2213 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:26:09.511 01:08:43 -- dd/common.sh@130 -- # lbaf=04 00:26:09.512 01:08:43 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery 
Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer 
Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 103 Data Units Written: 7 Host Read Commands: 2213 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:26:09.512 01:08:43 -- dd/common.sh@132 -- # lbaf=4096 00:26:09.512 01:08:43 -- dd/common.sh@134 -- # echo 4096 00:26:09.512 01:08:43 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:26:09.512 01:08:43 -- dd/basic_rw.sh@96 -- # : 00:26:09.512 01:08:43 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:09.512 01:08:43 -- dd/basic_rw.sh@96 -- # gen_conf 00:26:09.512 01:08:43 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:26:09.512 01:08:43 -- dd/common.sh@31 -- # xtrace_disable 
00:26:09.512 01:08:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:09.512 01:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:09.512 01:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:09.512 ************************************ 00:26:09.512 START TEST dd_bs_lt_native_bs 00:26:09.512 ************************************ 00:26:09.512 01:08:43 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:09.512 01:08:43 -- common/autotest_common.sh@650 -- # local es=0 00:26:09.512 01:08:43 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:09.512 01:08:43 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:09.512 01:08:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:09.512 01:08:43 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:09.512 01:08:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:09.512 01:08:43 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:09.512 01:08:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:09.512 01:08:43 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:09.512 01:08:43 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:09.512 01:08:43 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:09.512 { 00:26:09.512 "subsystems": [ 00:26:09.512 { 00:26:09.512 "subsystem": "bdev", 00:26:09.512 "config": [ 00:26:09.512 { 00:26:09.512 "params": { 00:26:09.512 "trtype": "pcie", 00:26:09.512 "traddr": "0000:00:06.0", 00:26:09.512 "name": "Nvme0" 00:26:09.512 }, 00:26:09.512 "method": "bdev_nvme_attach_controller" 00:26:09.512 }, 00:26:09.512 { 00:26:09.512 "method": "bdev_wait_for_examine" 00:26:09.512 } 00:26:09.512 ] 00:26:09.512 } 00:26:09.512 ] 00:26:09.512 } 00:26:09.512 [2024-11-18 01:08:43.701226] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:09.512 [2024-11-18 01:08:43.701437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143751 ] 00:26:09.512 [2024-11-18 01:08:43.844398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.771 [2024-11-18 01:08:43.925509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.771 [2024-11-18 01:08:44.113550] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:26:09.771 [2024-11-18 01:08:44.113682] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:10.030 [2024-11-18 01:08:44.303628] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:10.290 01:08:44 -- common/autotest_common.sh@653 -- # es=234 00:26:10.290 01:08:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:10.290 01:08:44 -- common/autotest_common.sh@662 -- # es=106 00:26:10.290 01:08:44 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:10.290 01:08:44 -- common/autotest_common.sh@670 -- # es=1 00:26:10.290 01:08:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:10.290 00:26:10.290 real 0m0.882s 00:26:10.290 user 0m0.570s 00:26:10.290 sys 0m0.270s 00:26:10.290 01:08:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:10.290 01:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:10.290 ************************************ 00:26:10.290 END TEST dd_bs_lt_native_bs 00:26:10.290 ************************************ 00:26:10.290 01:08:44 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:26:10.290 01:08:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:10.290 01:08:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:10.290 01:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:10.290 ************************************ 00:26:10.290 START TEST dd_rw 00:26:10.290 ************************************ 00:26:10.290 01:08:44 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:26:10.290 01:08:44 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:26:10.290 01:08:44 -- dd/basic_rw.sh@12 -- # local count size 00:26:10.290 01:08:44 -- dd/basic_rw.sh@13 -- # local qds bss 00:26:10.290 01:08:44 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:26:10.290 01:08:44 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:10.290 01:08:44 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:10.290 01:08:44 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:10.290 01:08:44 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:10.290 01:08:44 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:10.290 01:08:44 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:10.290 01:08:44 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:10.290 01:08:44 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:10.290 01:08:44 -- dd/basic_rw.sh@23 -- # count=15 00:26:10.290 01:08:44 -- dd/basic_rw.sh@24 -- # count=15 00:26:10.290 01:08:44 -- dd/basic_rw.sh@25 -- # size=61440 00:26:10.290 01:08:44 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:26:10.290 01:08:44 -- dd/common.sh@98 -- # xtrace_disable 00:26:10.291 01:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:10.859 01:08:45 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
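The traced invocation above is the write half of the basic_rw round trip: dd.dump0 is filled with 61440 random bytes (gen_bytes 61440), pushed to the Nvme0n1 bdev at bs=4096 qd=1, read back into dd.dump1, and the two dumps are compared with diff -q. A minimal sketch of that round trip, assuming the spdk_dd binary and the 0000:00:06.0 controller shown in this log (the inline JSON config and the use of /dev/urandom in place of gen_bytes are illustrative assumptions, not quotes from the test script):

  # sketch only: reconstructs the write -> read -> diff cycle seen in the trace
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CONF='{"subsystems":[{"subsystem":"bdev","config":[
    {"params":{"trtype":"pcie","traddr":"0000:00:06.0","name":"Nvme0"},
     "method":"bdev_nvme_attach_controller"},
    {"method":"bdev_wait_for_examine"}]}]}'
  dd if=/dev/urandom of=dd.dump0 bs=4096 count=15                       # 61440 bytes, stand-in for gen_bytes
  "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(echo "$CONF")
  "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json <(echo "$CONF")
  diff -q dd.dump0 dd.dump1                                             # exit 0 when the round trip is intact

The same pattern repeats below for each bs/qd combination; only the block size, queue depth and count change.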
00:26:10.859 01:08:45 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:10.859 01:08:45 -- dd/common.sh@31 -- # xtrace_disable 00:26:10.859 01:08:45 -- common/autotest_common.sh@10 -- # set +x 00:26:10.859 [2024-11-18 01:08:45.204287] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:10.859 [2024-11-18 01:08:45.204480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143796 ] 00:26:10.859 { 00:26:10.859 "subsystems": [ 00:26:10.859 { 00:26:10.859 "subsystem": "bdev", 00:26:10.859 "config": [ 00:26:10.859 { 00:26:10.859 "params": { 00:26:10.859 "trtype": "pcie", 00:26:10.859 "traddr": "0000:00:06.0", 00:26:10.859 "name": "Nvme0" 00:26:10.859 }, 00:26:10.859 "method": "bdev_nvme_attach_controller" 00:26:10.859 }, 00:26:10.859 { 00:26:10.859 "method": "bdev_wait_for_examine" 00:26:10.859 } 00:26:10.859 ] 00:26:10.859 } 00:26:10.859 ] 00:26:10.859 } 00:26:11.118 [2024-11-18 01:08:45.347985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.118 [2024-11-18 01:08:45.421884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.377  [2024-11-18T01:08:46.037Z] Copying: 60/60 [kB] (average 19 MBps) 00:26:11.638 00:26:11.912 01:08:46 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:26:11.912 01:08:46 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:11.912 01:08:46 -- dd/common.sh@31 -- # xtrace_disable 00:26:11.912 01:08:46 -- common/autotest_common.sh@10 -- # set +x 00:26:11.912 [2024-11-18 01:08:46.087842] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:11.912 [2024-11-18 01:08:46.088017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143814 ] 00:26:11.912 { 00:26:11.912 "subsystems": [ 00:26:11.912 { 00:26:11.912 "subsystem": "bdev", 00:26:11.912 "config": [ 00:26:11.912 { 00:26:11.912 "params": { 00:26:11.912 "trtype": "pcie", 00:26:11.912 "traddr": "0000:00:06.0", 00:26:11.912 "name": "Nvme0" 00:26:11.912 }, 00:26:11.912 "method": "bdev_nvme_attach_controller" 00:26:11.912 }, 00:26:11.912 { 00:26:11.912 "method": "bdev_wait_for_examine" 00:26:11.912 } 00:26:11.912 ] 00:26:11.912 } 00:26:11.912 ] 00:26:11.912 } 00:26:11.912 [2024-11-18 01:08:46.229186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.184 [2024-11-18 01:08:46.306441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.184  [2024-11-18T01:08:47.157Z] Copying: 60/60 [kB] (average 19 MBps) 00:26:12.758 00:26:12.758 01:08:46 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:12.758 01:08:46 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:26:12.758 01:08:46 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:12.758 01:08:46 -- dd/common.sh@11 -- # local nvme_ref= 00:26:12.758 01:08:46 -- dd/common.sh@12 -- # local size=61440 00:26:12.758 01:08:46 -- dd/common.sh@14 -- # local bs=1048576 00:26:12.758 01:08:46 -- dd/common.sh@15 -- # local count=1 00:26:12.758 01:08:46 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:12.758 01:08:46 -- dd/common.sh@18 -- # gen_conf 00:26:12.758 01:08:46 -- dd/common.sh@31 -- # xtrace_disable 00:26:12.758 01:08:46 -- common/autotest_common.sh@10 -- # set +x 00:26:12.758 { 00:26:12.758 "subsystems": [ 00:26:12.758 { 00:26:12.758 "subsystem": "bdev", 00:26:12.758 "config": [ 00:26:12.758 { 00:26:12.758 "params": { 00:26:12.758 "trtype": "pcie", 00:26:12.758 "traddr": "0000:00:06.0", 00:26:12.758 "name": "Nvme0" 00:26:12.758 }, 00:26:12.758 "method": "bdev_nvme_attach_controller" 00:26:12.758 }, 00:26:12.758 { 00:26:12.758 "method": "bdev_wait_for_examine" 00:26:12.758 } 00:26:12.758 ] 00:26:12.758 } 00:26:12.758 ] 00:26:12.758 } 00:26:12.758 [2024-11-18 01:08:46.997090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:12.758 [2024-11-18 01:08:46.997469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143837 ] 00:26:12.758 [2024-11-18 01:08:47.152015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.017 [2024-11-18 01:08:47.224659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.017  [2024-11-18T01:08:47.982Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:26:13.583 00:26:13.583 01:08:47 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:13.583 01:08:47 -- dd/basic_rw.sh@23 -- # count=15 00:26:13.583 01:08:47 -- dd/basic_rw.sh@24 -- # count=15 00:26:13.583 01:08:47 -- dd/basic_rw.sh@25 -- # size=61440 00:26:13.583 01:08:47 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:26:13.583 01:08:47 -- dd/common.sh@98 -- # xtrace_disable 00:26:13.583 01:08:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.151 01:08:48 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:26:14.151 01:08:48 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:14.151 01:08:48 -- dd/common.sh@31 -- # xtrace_disable 00:26:14.151 01:08:48 -- common/autotest_common.sh@10 -- # set +x 00:26:14.151 [2024-11-18 01:08:48.416605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:14.151 [2024-11-18 01:08:48.416824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143857 ] 00:26:14.151 { 00:26:14.151 "subsystems": [ 00:26:14.151 { 00:26:14.151 "subsystem": "bdev", 00:26:14.151 "config": [ 00:26:14.151 { 00:26:14.151 "params": { 00:26:14.151 "trtype": "pcie", 00:26:14.151 "traddr": "0000:00:06.0", 00:26:14.151 "name": "Nvme0" 00:26:14.151 }, 00:26:14.151 "method": "bdev_nvme_attach_controller" 00:26:14.151 }, 00:26:14.151 { 00:26:14.151 "method": "bdev_wait_for_examine" 00:26:14.151 } 00:26:14.151 ] 00:26:14.151 } 00:26:14.151 ] 00:26:14.151 } 00:26:14.410 [2024-11-18 01:08:48.560256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.410 [2024-11-18 01:08:48.631364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.669  [2024-11-18T01:08:49.327Z] Copying: 60/60 [kB] (average 58 MBps) 00:26:14.928 00:26:14.928 01:08:49 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:26:14.928 01:08:49 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:14.928 01:08:49 -- dd/common.sh@31 -- # xtrace_disable 00:26:14.928 01:08:49 -- common/autotest_common.sh@10 -- # set +x 00:26:14.928 { 00:26:14.928 "subsystems": [ 00:26:14.928 { 00:26:14.928 "subsystem": "bdev", 00:26:14.928 "config": [ 00:26:14.928 { 00:26:14.928 "params": { 00:26:14.928 "trtype": "pcie", 00:26:14.928 "traddr": "0000:00:06.0", 00:26:14.928 "name": "Nvme0" 00:26:14.928 }, 00:26:14.928 "method": "bdev_nvme_attach_controller" 00:26:14.928 }, 00:26:14.928 { 00:26:14.928 "method": "bdev_wait_for_examine" 00:26:14.928 } 00:26:14.928 ] 00:26:14.928 } 00:26:14.928 ] 00:26:14.928 } 00:26:14.928 [2024-11-18 01:08:49.320054] Starting SPDK v24.01.1-pre git sha1 c13c99a5e 
/ DPDK 22.11.4 initialization... 00:26:14.928 [2024-11-18 01:08:49.320466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143877 ] 00:26:15.197 [2024-11-18 01:08:49.476171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.197 [2024-11-18 01:08:49.549430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.468  [2024-11-18T01:08:50.441Z] Copying: 60/60 [kB] (average 58 MBps) 00:26:16.042 00:26:16.042 01:08:50 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:16.042 01:08:50 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:26:16.042 01:08:50 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:16.042 01:08:50 -- dd/common.sh@11 -- # local nvme_ref= 00:26:16.042 01:08:50 -- dd/common.sh@12 -- # local size=61440 00:26:16.042 01:08:50 -- dd/common.sh@14 -- # local bs=1048576 00:26:16.042 01:08:50 -- dd/common.sh@15 -- # local count=1 00:26:16.042 01:08:50 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:16.042 01:08:50 -- dd/common.sh@18 -- # gen_conf 00:26:16.042 01:08:50 -- dd/common.sh@31 -- # xtrace_disable 00:26:16.042 01:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:16.042 { 00:26:16.042 "subsystems": [ 00:26:16.042 { 00:26:16.042 "subsystem": "bdev", 00:26:16.042 "config": [ 00:26:16.042 { 00:26:16.042 "params": { 00:26:16.042 "trtype": "pcie", 00:26:16.042 "traddr": "0000:00:06.0", 00:26:16.042 "name": "Nvme0" 00:26:16.042 }, 00:26:16.042 "method": "bdev_nvme_attach_controller" 00:26:16.042 }, 00:26:16.042 { 00:26:16.042 "method": "bdev_wait_for_examine" 00:26:16.043 } 00:26:16.043 ] 00:26:16.043 } 00:26:16.043 ] 00:26:16.043 } 00:26:16.043 [2024-11-18 01:08:50.255551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:16.043 [2024-11-18 01:08:50.256365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143898 ] 00:26:16.043 [2024-11-18 01:08:50.412007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.302 [2024-11-18 01:08:50.484202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.637  [2024-11-18T01:08:51.314Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:16.915 00:26:16.915 01:08:51 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:16.915 01:08:51 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:16.915 01:08:51 -- dd/basic_rw.sh@23 -- # count=7 00:26:16.915 01:08:51 -- dd/basic_rw.sh@24 -- # count=7 00:26:16.915 01:08:51 -- dd/basic_rw.sh@25 -- # size=57344 00:26:16.915 01:08:51 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:26:16.915 01:08:51 -- dd/common.sh@98 -- # xtrace_disable 00:26:16.915 01:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:17.483 01:08:51 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:26:17.483 01:08:51 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:17.483 01:08:51 -- dd/common.sh@31 -- # xtrace_disable 00:26:17.483 01:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:17.483 [2024-11-18 01:08:51.687004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:17.483 [2024-11-18 01:08:51.687210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143925 ] 00:26:17.483 { 00:26:17.483 "subsystems": [ 00:26:17.483 { 00:26:17.483 "subsystem": "bdev", 00:26:17.483 "config": [ 00:26:17.483 { 00:26:17.483 "params": { 00:26:17.483 "trtype": "pcie", 00:26:17.483 "traddr": "0000:00:06.0", 00:26:17.483 "name": "Nvme0" 00:26:17.483 }, 00:26:17.483 "method": "bdev_nvme_attach_controller" 00:26:17.483 }, 00:26:17.483 { 00:26:17.483 "method": "bdev_wait_for_examine" 00:26:17.483 } 00:26:17.483 ] 00:26:17.483 } 00:26:17.483 ] 00:26:17.483 } 00:26:17.483 [2024-11-18 01:08:51.831655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.742 [2024-11-18 01:08:51.904691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.742  [2024-11-18T01:08:52.708Z] Copying: 56/56 [kB] (average 54 MBps) 00:26:18.309 00:26:18.309 01:08:52 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:18.309 01:08:52 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:26:18.309 01:08:52 -- dd/common.sh@31 -- # xtrace_disable 00:26:18.309 01:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:18.309 { 00:26:18.309 "subsystems": [ 00:26:18.309 { 00:26:18.309 "subsystem": "bdev", 00:26:18.309 "config": [ 00:26:18.309 { 00:26:18.309 "params": { 00:26:18.309 "trtype": "pcie", 00:26:18.309 "traddr": "0000:00:06.0", 00:26:18.309 "name": "Nvme0" 00:26:18.309 }, 00:26:18.309 "method": "bdev_nvme_attach_controller" 00:26:18.309 }, 00:26:18.309 { 00:26:18.309 "method": "bdev_wait_for_examine" 00:26:18.309 } 00:26:18.309 ] 00:26:18.309 } 00:26:18.309 ] 00:26:18.309 } 00:26:18.309 
[2024-11-18 01:08:52.594278] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:18.309 [2024-11-18 01:08:52.594536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143937 ] 00:26:18.568 [2024-11-18 01:08:52.751879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.568 [2024-11-18 01:08:52.824434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.827  [2024-11-18T01:08:53.484Z] Copying: 56/56 [kB] (average 54 MBps) 00:26:19.085 00:26:19.085 01:08:53 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:19.085 01:08:53 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:26:19.085 01:08:53 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:19.085 01:08:53 -- dd/common.sh@11 -- # local nvme_ref= 00:26:19.085 01:08:53 -- dd/common.sh@12 -- # local size=57344 00:26:19.085 01:08:53 -- dd/common.sh@14 -- # local bs=1048576 00:26:19.085 01:08:53 -- dd/common.sh@15 -- # local count=1 00:26:19.085 01:08:53 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:19.085 01:08:53 -- dd/common.sh@18 -- # gen_conf 00:26:19.085 01:08:53 -- dd/common.sh@31 -- # xtrace_disable 00:26:19.085 01:08:53 -- common/autotest_common.sh@10 -- # set +x 00:26:19.344 { 00:26:19.344 "subsystems": [ 00:26:19.344 { 00:26:19.344 "subsystem": "bdev", 00:26:19.344 "config": [ 00:26:19.344 { 00:26:19.344 "params": { 00:26:19.344 "trtype": "pcie", 00:26:19.344 "traddr": "0000:00:06.0", 00:26:19.344 "name": "Nvme0" 00:26:19.344 }, 00:26:19.344 "method": "bdev_nvme_attach_controller" 00:26:19.344 }, 00:26:19.344 { 00:26:19.344 "method": "bdev_wait_for_examine" 00:26:19.344 } 00:26:19.344 ] 00:26:19.344 } 00:26:19.344 ] 00:26:19.344 } 00:26:19.344 [2024-11-18 01:08:53.508738] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:19.344 [2024-11-18 01:08:53.509181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143954 ] 00:26:19.344 [2024-11-18 01:08:53.733921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.603 [2024-11-18 01:08:53.821721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.861  [2024-11-18T01:08:54.530Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:20.131 00:26:20.131 01:08:54 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:20.131 01:08:54 -- dd/basic_rw.sh@23 -- # count=7 00:26:20.131 01:08:54 -- dd/basic_rw.sh@24 -- # count=7 00:26:20.131 01:08:54 -- dd/basic_rw.sh@25 -- # size=57344 00:26:20.131 01:08:54 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:26:20.131 01:08:54 -- dd/common.sh@98 -- # xtrace_disable 00:26:20.131 01:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:20.704 01:08:54 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:26:20.704 01:08:54 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:20.704 01:08:54 -- dd/common.sh@31 -- # xtrace_disable 00:26:20.704 01:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:20.704 { 00:26:20.704 "subsystems": [ 00:26:20.704 { 00:26:20.704 "subsystem": "bdev", 00:26:20.704 "config": [ 00:26:20.704 { 00:26:20.704 "params": { 00:26:20.704 "trtype": "pcie", 00:26:20.704 "traddr": "0000:00:06.0", 00:26:20.704 "name": "Nvme0" 00:26:20.704 }, 00:26:20.704 "method": "bdev_nvme_attach_controller" 00:26:20.704 }, 00:26:20.704 { 00:26:20.704 "method": "bdev_wait_for_examine" 00:26:20.704 } 00:26:20.704 ] 00:26:20.704 } 00:26:20.704 ] 00:26:20.704 } 00:26:20.704 [2024-11-18 01:08:54.971055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
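For orientation, the qd=1/qd=64 pairs at bs=4096 and bs=8192 above (and bs=16384 further down) come from the small parameter matrix that basic_rw.sh is tracing: qds=(1 64) and bss built by shifting the native block size. A hedged reconstruction from the logged values (the count formula is inferred from the count=15/7/3 and size=61440/57344/49152 lines, not quoted from the script):

  # sketch only: the bs/qd matrix implied by the trace
  native_bs=4096                       # parsed earlier from 'LBA Format #04: Data Size: 4096'
  qds=(1 64)                           # basic_rw.sh@15
  bss=()
  for i in {0..2}; do
    bss+=($((native_bs << i)))         # 4096 8192 16384, per basic_rw.sh@17-18
  done
  for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
      count=$((61440 / bs))            # reproduces count=15, 7, 3
      printf 'bs=%s qd=%s count=%s size=%s\n' "$bs" "$qd" "$count" "$((bs * count))"
    done
  done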
00:26:20.704 [2024-11-18 01:08:54.971228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143985 ] 00:26:20.962 [2024-11-18 01:08:55.115390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.962 [2024-11-18 01:08:55.188496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.222  [2024-11-18T01:08:55.896Z] Copying: 56/56 [kB] (average 54 MBps) 00:26:21.497 00:26:21.497 01:08:55 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:26:21.497 01:08:55 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:21.497 01:08:55 -- dd/common.sh@31 -- # xtrace_disable 00:26:21.497 01:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:21.497 { 00:26:21.497 "subsystems": [ 00:26:21.497 { 00:26:21.497 "subsystem": "bdev", 00:26:21.497 "config": [ 00:26:21.497 { 00:26:21.497 "params": { 00:26:21.497 "trtype": "pcie", 00:26:21.497 "traddr": "0000:00:06.0", 00:26:21.497 "name": "Nvme0" 00:26:21.497 }, 00:26:21.497 "method": "bdev_nvme_attach_controller" 00:26:21.497 }, 00:26:21.497 { 00:26:21.497 "method": "bdev_wait_for_examine" 00:26:21.497 } 00:26:21.497 ] 00:26:21.497 } 00:26:21.497 ] 00:26:21.497 } 00:26:21.497 [2024-11-18 01:08:55.875675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:21.497 [2024-11-18 01:08:55.876126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144001 ] 00:26:21.769 [2024-11-18 01:08:56.031852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.769 [2024-11-18 01:08:56.103198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.039  [2024-11-18T01:08:57.012Z] Copying: 56/56 [kB] (average 54 MBps) 00:26:22.613 00:26:22.613 01:08:56 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:22.613 01:08:56 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:26:22.613 01:08:56 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:22.613 01:08:56 -- dd/common.sh@11 -- # local nvme_ref= 00:26:22.613 01:08:56 -- dd/common.sh@12 -- # local size=57344 00:26:22.613 01:08:56 -- dd/common.sh@14 -- # local bs=1048576 00:26:22.613 01:08:56 -- dd/common.sh@15 -- # local count=1 00:26:22.613 01:08:56 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:22.613 01:08:56 -- dd/common.sh@18 -- # gen_conf 00:26:22.613 01:08:56 -- dd/common.sh@31 -- # xtrace_disable 00:26:22.613 01:08:56 -- common/autotest_common.sh@10 -- # set +x 00:26:22.613 { 00:26:22.613 "subsystems": [ 00:26:22.613 { 00:26:22.613 "subsystem": "bdev", 00:26:22.613 "config": [ 00:26:22.613 { 00:26:22.613 "params": { 00:26:22.613 "trtype": "pcie", 00:26:22.613 "traddr": "0000:00:06.0", 00:26:22.613 "name": "Nvme0" 00:26:22.613 }, 00:26:22.613 "method": "bdev_nvme_attach_controller" 00:26:22.613 }, 00:26:22.613 { 00:26:22.613 "method": "bdev_wait_for_examine" 00:26:22.613 } 00:26:22.613 ] 00:26:22.613 } 00:26:22.613 ] 00:26:22.613 } 00:26:22.613 [2024-11-18 
01:08:56.796203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:22.613 [2024-11-18 01:08:56.796385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144024 ] 00:26:22.613 [2024-11-18 01:08:56.939158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.613 [2024-11-18 01:08:57.013954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.872  [2024-11-18T01:08:57.839Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:23.440 00:26:23.440 01:08:57 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:23.440 01:08:57 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:23.440 01:08:57 -- dd/basic_rw.sh@23 -- # count=3 00:26:23.440 01:08:57 -- dd/basic_rw.sh@24 -- # count=3 00:26:23.440 01:08:57 -- dd/basic_rw.sh@25 -- # size=49152 00:26:23.440 01:08:57 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:26:23.440 01:08:57 -- dd/common.sh@98 -- # xtrace_disable 00:26:23.440 01:08:57 -- common/autotest_common.sh@10 -- # set +x 00:26:24.008 01:08:58 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:26:24.008 01:08:58 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:24.008 01:08:58 -- dd/common.sh@31 -- # xtrace_disable 00:26:24.008 01:08:58 -- common/autotest_common.sh@10 -- # set +x 00:26:24.008 { 00:26:24.008 "subsystems": [ 00:26:24.008 { 00:26:24.008 "subsystem": "bdev", 00:26:24.008 "config": [ 00:26:24.008 { 00:26:24.008 "params": { 00:26:24.008 "trtype": "pcie", 00:26:24.008 "traddr": "0000:00:06.0", 00:26:24.008 "name": "Nvme0" 00:26:24.008 }, 00:26:24.008 "method": "bdev_nvme_attach_controller" 00:26:24.008 }, 00:26:24.008 { 00:26:24.008 "method": "bdev_wait_for_examine" 00:26:24.008 } 00:26:24.008 ] 00:26:24.008 } 00:26:24.008 ] 00:26:24.008 } 00:26:24.008 [2024-11-18 01:08:58.171397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:24.008 [2024-11-18 01:08:58.171727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144044 ] 00:26:24.008 [2024-11-18 01:08:58.314620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.008 [2024-11-18 01:08:58.388743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.268  [2024-11-18T01:08:59.235Z] Copying: 48/48 [kB] (average 46 MBps) 00:26:24.836 00:26:24.836 01:08:59 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:26:24.836 01:08:59 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:24.836 01:08:59 -- dd/common.sh@31 -- # xtrace_disable 00:26:24.836 01:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:24.836 { 00:26:24.836 "subsystems": [ 00:26:24.836 { 00:26:24.836 "subsystem": "bdev", 00:26:24.836 "config": [ 00:26:24.836 { 00:26:24.836 "params": { 00:26:24.836 "trtype": "pcie", 00:26:24.836 "traddr": "0000:00:06.0", 00:26:24.836 "name": "Nvme0" 00:26:24.836 }, 00:26:24.836 "method": "bdev_nvme_attach_controller" 00:26:24.836 }, 00:26:24.836 { 00:26:24.836 "method": "bdev_wait_for_examine" 00:26:24.836 } 00:26:24.836 ] 00:26:24.836 } 00:26:24.836 ] 00:26:24.836 } 00:26:24.836 [2024-11-18 01:08:59.082520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:24.836 [2024-11-18 01:08:59.082975] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144064 ] 00:26:25.093 [2024-11-18 01:08:59.240041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.093 [2024-11-18 01:08:59.313954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.351  [2024-11-18T01:09:00.009Z] Copying: 48/48 [kB] (average 46 MBps) 00:26:25.610 00:26:25.610 01:08:59 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:25.610 01:08:59 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:26:25.610 01:08:59 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:25.610 01:08:59 -- dd/common.sh@11 -- # local nvme_ref= 00:26:25.610 01:08:59 -- dd/common.sh@12 -- # local size=49152 00:26:25.610 01:08:59 -- dd/common.sh@14 -- # local bs=1048576 00:26:25.610 01:08:59 -- dd/common.sh@15 -- # local count=1 00:26:25.610 01:08:59 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:25.610 01:08:59 -- dd/common.sh@18 -- # gen_conf 00:26:25.610 01:08:59 -- dd/common.sh@31 -- # xtrace_disable 00:26:25.610 01:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:25.610 { 00:26:25.610 "subsystems": [ 00:26:25.610 { 00:26:25.610 "subsystem": "bdev", 00:26:25.610 "config": [ 00:26:25.610 { 00:26:25.610 "params": { 00:26:25.610 "trtype": "pcie", 00:26:25.610 "traddr": "0000:00:06.0", 00:26:25.610 "name": "Nvme0" 00:26:25.610 }, 00:26:25.610 "method": "bdev_nvme_attach_controller" 00:26:25.610 }, 00:26:25.610 { 00:26:25.610 "method": "bdev_wait_for_examine" 00:26:25.610 } 00:26:25.610 ] 00:26:25.610 } 00:26:25.610 ] 00:26:25.610 } 00:26:25.610 [2024-11-18 
01:09:00.004517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:25.610 [2024-11-18 01:09:00.005562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144084 ] 00:26:25.868 [2024-11-18 01:09:00.165333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.868 [2024-11-18 01:09:00.245069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.127  [2024-11-18T01:09:01.093Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:26.694 00:26:26.694 01:09:00 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:26.694 01:09:00 -- dd/basic_rw.sh@23 -- # count=3 00:26:26.694 01:09:00 -- dd/basic_rw.sh@24 -- # count=3 00:26:26.694 01:09:00 -- dd/basic_rw.sh@25 -- # size=49152 00:26:26.694 01:09:00 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:26:26.694 01:09:00 -- dd/common.sh@98 -- # xtrace_disable 00:26:26.694 01:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:26.952 01:09:01 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:26:26.952 01:09:01 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:26.952 01:09:01 -- dd/common.sh@31 -- # xtrace_disable 00:26:26.952 01:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:27.210 { 00:26:27.210 "subsystems": [ 00:26:27.210 { 00:26:27.210 "subsystem": "bdev", 00:26:27.210 "config": [ 00:26:27.210 { 00:26:27.210 "params": { 00:26:27.210 "trtype": "pcie", 00:26:27.210 "traddr": "0000:00:06.0", 00:26:27.210 "name": "Nvme0" 00:26:27.210 }, 00:26:27.210 "method": "bdev_nvme_attach_controller" 00:26:27.210 }, 00:26:27.210 { 00:26:27.210 "method": "bdev_wait_for_examine" 00:26:27.210 } 00:26:27.210 ] 00:26:27.210 } 00:26:27.210 ] 00:26:27.210 } 00:26:27.210 [2024-11-18 01:09:01.380878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:27.210 [2024-11-18 01:09:01.381883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144105 ] 00:26:27.210 [2024-11-18 01:09:01.536452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.210 [2024-11-18 01:09:01.604747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.469  [2024-11-18T01:09:02.461Z] Copying: 48/48 [kB] (average 46 MBps) 00:26:28.062 00:26:28.062 01:09:02 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:26:28.062 01:09:02 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:28.062 01:09:02 -- dd/common.sh@31 -- # xtrace_disable 00:26:28.062 01:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:28.062 { 00:26:28.062 "subsystems": [ 00:26:28.062 { 00:26:28.062 "subsystem": "bdev", 00:26:28.062 "config": [ 00:26:28.062 { 00:26:28.062 "params": { 00:26:28.062 "trtype": "pcie", 00:26:28.062 "traddr": "0000:00:06.0", 00:26:28.062 "name": "Nvme0" 00:26:28.062 }, 00:26:28.062 "method": "bdev_nvme_attach_controller" 00:26:28.062 }, 00:26:28.062 { 00:26:28.062 "method": "bdev_wait_for_examine" 00:26:28.062 } 00:26:28.062 ] 00:26:28.062 } 00:26:28.062 ] 00:26:28.062 } 00:26:28.062 [2024-11-18 01:09:02.276004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:28.062 [2024-11-18 01:09:02.276435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144120 ] 00:26:28.062 [2024-11-18 01:09:02.432250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.320 [2024-11-18 01:09:02.504208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.320  [2024-11-18T01:09:03.317Z] Copying: 48/48 [kB] (average 46 MBps) 00:26:28.918 00:26:28.918 01:09:03 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:28.918 01:09:03 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:26:28.918 01:09:03 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:28.918 01:09:03 -- dd/common.sh@11 -- # local nvme_ref= 00:26:28.918 01:09:03 -- dd/common.sh@12 -- # local size=49152 00:26:28.918 01:09:03 -- dd/common.sh@14 -- # local bs=1048576 00:26:28.918 01:09:03 -- dd/common.sh@15 -- # local count=1 00:26:28.918 01:09:03 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:28.918 01:09:03 -- dd/common.sh@18 -- # gen_conf 00:26:28.918 01:09:03 -- dd/common.sh@31 -- # xtrace_disable 00:26:28.918 01:09:03 -- common/autotest_common.sh@10 -- # set +x 00:26:28.918 { 00:26:28.918 "subsystems": [ 00:26:28.918 { 00:26:28.918 "subsystem": "bdev", 00:26:28.918 "config": [ 00:26:28.918 { 00:26:28.918 "params": { 00:26:28.918 "trtype": "pcie", 00:26:28.918 "traddr": "0000:00:06.0", 00:26:28.918 "name": "Nvme0" 00:26:28.918 }, 00:26:28.918 "method": "bdev_nvme_attach_controller" 00:26:28.918 }, 00:26:28.918 { 00:26:28.918 "method": "bdev_wait_for_examine" 00:26:28.918 } 00:26:28.918 ] 00:26:28.918 } 00:26:28.918 ] 00:26:28.918 } 00:26:28.918 [2024-11-18 
01:09:03.200543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:28.918 [2024-11-18 01:09:03.201207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144141 ] 00:26:29.177 [2024-11-18 01:09:03.356469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.177 [2024-11-18 01:09:03.423518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.435  [2024-11-18T01:09:04.093Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:26:29.694 00:26:29.694 00:26:29.694 real 0m19.457s 00:26:29.694 user 0m12.569s 00:26:29.694 sys 0m5.496s 00:26:29.694 ************************************ 00:26:29.694 END TEST dd_rw 00:26:29.694 ************************************ 00:26:29.694 01:09:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:29.694 01:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:29.953 01:09:04 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:26:29.953 01:09:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:29.953 01:09:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:29.953 01:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:29.953 ************************************ 00:26:29.953 START TEST dd_rw_offset 00:26:29.953 ************************************ 00:26:29.953 01:09:04 -- common/autotest_common.sh@1114 -- # basic_offset 00:26:29.953 01:09:04 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:26:29.953 01:09:04 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:26:29.953 01:09:04 -- dd/common.sh@98 -- # xtrace_disable 00:26:29.953 01:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:29.953 01:09:04 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:26:29.953 01:09:04 -- dd/basic_rw.sh@56 -- # 
data=r9jekcmjfmm356defdsqsj59sbro7butjzouhjqk8zq62tldk8hyiihi2ync0p07fy4mx7dpx6sraw0de8joh5ypydbrdw20ivjcyw1uwfe6i0xmcuhahq0hc8lmzkohyt51711timoniod5m7567u86w5954octp789zyfaichrtjagip1w7snewcrtekpdpli6qs1r4153dd3cxi5urycahtuz1s3eixcrrbyqhuqy6xlq2mtim6jngucc6o8kng2gifvcv7os3byqcnz8jj9qzp26kcl8vgdldde15dp92jgqaim8b2890bn6v5dcgnx3ccwllka2jf57xjgsoaq9h9o4t119mmvgqundrtm1hwv5l30hewogojiezlxjnzutft4vjaqv3mxre27zqcied6261v5i3y095ifphu397bksqaltli8xlfqwk2zupvh1y3l4cx3xidoetbmmqk2pvj2u3rlr0cn3y8xa80dezdnglo901qyc5nfc27h8fd0cyxxthza0iofafzqzkh52v0e9uf2yfpos1sfvi6xla1xvk75qijqx8u56xu8u1b3wrwk0o2bsy2vhm91xlmoosalx1959jyi9ceqttsmw869dbj1v8jm8c23vdpg0ejvov1koa8kc6aaiwx2tlpekvr87fdwj4jpep14yh68j8efsg3p10su4rdt1tp0pc2sff9byxb11iq5rsqflalw8mrzh51dpgkujgtm4ne4wkk8szyspl23ynys9kovf3wcgwkpq68wutbwbx6u4293utlh48vp1oig3o3bzjv7sjhmam5vc0omq98cqhxreom47buwvrujfb1p5v8z697oy4phnxxb6dfk3ypsxnkpg8zvv5xc60c1ybzbcbadx8knqdb9ddp4q5swglfpbaaf0w517rjibxmszqm1kuvo5gx1yydrinnvamkyfp0gaq68flikuhhih3rsagw5c1t9iynon3h7m8k6a5xme4bi4iflrb7maoy9vwmou2z8vmur9ioxzq1ari5buc0ehlv7x3e5i3bhggmxupprv56br5yx2zx3b42ixrhitw4vvr9odig76ndgkgie7swxoggtlfvvucqw2yub723fb8qc4q1yd3pmr9wk8dg8sna1ppxrfuehllzysrqe1eh4e7jjc85vvjsxkljj4shf7c0cjddop5h8e9yln36te2c1n6q2ccqrjrvyk0usv0dj54ov94iygkp9ya8lhvazpq88dw0hqjk35gc41ejdcrg7i5o5wmppwmsffu5ihmg2iuz3gc991r3fucpy2ubq775p61ub4vd2wif1als8ypx9y53rgkgiom406v6si5q5o0i0dmvy4y3nkxy683o2tita3yb14kkw75etnsnvqsveqr4thw0kn8v6otic0h3uehdzr55bgg3t9ja4z7utwzaqc6yv5d593hl5kxp40qw8nmg9gp8c50cso7yjet1fycf6skp95vo5xrcac9b2y93sy2cwp5kq0i7rxtr3w3tvllxjc3ey1mnaehmefke0elb8aguy94cnlvy3eqtbaxk4zjmmzs1f4h5f9ulbt6zpf1ywzlftjlews9qfvwamqvqrm3hkfw6qkyam5c2i4rwqcycm2gttynv56pv06gw3i5w43oz6prh9fynthcadiwgoee1jl8fl1nwxp3l43pvomx0npyfk1lv6o908ma1umy2osgtqtt4lqyg1cf3hqpt97io6k9xgbjteis4qjubmsqkk987m9eyjpehoevaxwnq63ib37nfnf60x1umdgc0fojvwj9djpztbw9533p80rtfc3kv6jywdg7dsu4g0lvvyv2e9wx1w5do9105bpzin1xrayig6s9q1to08fv7ceakqd1uaz2ts0brghmiu3kad2bbh8smcvce3blllwv6eqc9ps8rtuyvigsjztw3uc88os0a4c46cl98g463o3kimcbansx1w7cz4rwgpwxqfs0lgyq28gze3yiim1iipzj6vztnmpphgdgmy4qo3ztlkvs8zm9w2d7hk8xssunq5jbs0n8yjtyiy7g9xqe5sfibgyw2xoz5sh8gx4kuhwnx30wpwf1yeo9z2j5dthhzbr2v79xi6k1g74uq6bc63k4vc1n6yi7m6j0mh3crov5saptt37e0oxunm5swa9fenjncivf88s1ri5bj6sflsycnzv7hrjd6n636vi3kz2ktd7nee0zsp9khziw4mam7zqax3xpgzey4pq1nbhp5s1859c8637ficq0w41nbc7vlodeurb9zjs4v1gbtc0vhxz5oii0c4xkx0afaaprst3z235sw6qshnl8ul0uhokmp66p3h65hy1jmbbt8i9ehx23y6skjq2mtfuv55emjp9p7p3ys21qcf7oowbmizc85wvgi6oddm58mdh3n0smwpjcu69lemk7o60pobmkrhd947p2mitw5troj17ntetmyz3gh7q0fgjirmy3cdc3zos49el570afe3nol4gtmpiu70zh6mqvgvy6wfv8zi44tsnzk0d475s2ebwg9wtmsfj7g5z7ax3g2ie0enkuh52ij2qs2fggo39clbild4leuj1qckngdq843cdjxhykg7n3dg24tovdbjujwrhhszpnqn9pduh3jitcifk3bfcvhmxos26szuwld20pgnffxk7ptm4lkeygip6ban92p2eyodelje5jl5wdm9t560nei0ydila71imxrdfx3s4h1gyorhtszabxte2qwjpkh2uxxugfyixtmbyys57lynbu3f6bgj4ieefu1rfpi0lmsc6h3pp84svbciz1e7oey4xc49r1btzqpsyb5h3wtuu92bl8trcfazx061omw1qab28rvkqbfg1x89w7pf84zi2r2qc3o9x6thmyesmx14xfnh5vp7ej9khlosb15elic5jgcnyi3p0n8xlufwk4y7sefmeo0l3yy9olnd8lzkndgn0i4o0nt2dic4ncb4bsnlfslwmrra71cuxqbl62fjneohhq54ykc5odfd14937qug07v2upsyhe1wpfjxipss3c6meoj3lfje4syuqhjzdwjwjojhr5qiqmvk92jzo66ze4fr4r1hsh6pqj06bwklfga1m9ua17itbf77oc980mjdjh66ais3zdix2v5wwpydjw6qfyfukffumo393pvdi182j4apsre104l641xewea0a6c69rzwyhv55goakw19lvnwrk56zp3qw66qhiqxqbbnbbxvcfdbil0e5nm3hllpmywzw2hqnvp08jg9e1rk2oxn60qem4bwb3yw6t4i147jygippsl7jq56lzflh2w9y08blqphcn1kpftvrh9wczthqo1pi5d95ydk1cd02attv6xyh0o0gelj8y2t6j3yanx9manqvk046ibbni4hy0437u8w0gb3td3j5xx7md56t13j24w69kma0ebi8pvjt0qw9hge5xc09dbonc47hz1r7dykar3etz55b70smcvkwwx
j3snfq1huugz1vudq38gzpyxosl8efq0qp21q4spinwa4s1m68dnd1y5ipj5tb5upmrrynvbckcs6idm2sijosod28ev6ti183m8nsx4l6qe3v9alnbpjermurtxni0zoiutsopo6hpii75e26npvxtuxn7nes08dhpvsgt3dh9qu918iq3ookka5wrbs5gpxu2i0zfbrtf7x28c387eyhqjwqeyu6ihi4u6umf0pg993pgujqgop7hpyw1zqxagcgz1dixhla6g5buxykzm0du3l50bijbgzwo8ha3jcyx087ehjkbtrif0ly2czhpelwlcxzg52sdmky99q9bw52vparqbr7ny3ezjilamqrb71l4ed173fc0zevx3f7ivcixnwxsmyf21nwo1lgtkkd4c07jw93r7exrmuva4ocbi7oc12az1mbxb23nnp0hx4aftt9ill61xekts3nb59l7zgnok7c4xb9wjjf10x3s8bv4hxhzog3b6asrbpt12f1rpxvovrgjjpthvtwp9ft8a8vw53tr0ya 00:26:29.953 01:09:04 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:26:29.953 01:09:04 -- dd/basic_rw.sh@59 -- # gen_conf 00:26:29.953 01:09:04 -- dd/common.sh@31 -- # xtrace_disable 00:26:29.953 01:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:29.953 { 00:26:29.953 "subsystems": [ 00:26:29.953 { 00:26:29.953 "subsystem": "bdev", 00:26:29.953 "config": [ 00:26:29.953 { 00:26:29.953 "params": { 00:26:29.953 "trtype": "pcie", 00:26:29.953 "traddr": "0000:00:06.0", 00:26:29.953 "name": "Nvme0" 00:26:29.953 }, 00:26:29.953 "method": "bdev_nvme_attach_controller" 00:26:29.953 }, 00:26:29.953 { 00:26:29.953 "method": "bdev_wait_for_examine" 00:26:29.953 } 00:26:29.953 ] 00:26:29.953 } 00:26:29.953 ] 00:26:29.953 } 00:26:29.953 [2024-11-18 01:09:04.240524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:29.953 [2024-11-18 01:09:04.241203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144181 ] 00:26:30.212 [2024-11-18 01:09:04.396165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.212 [2024-11-18 01:09:04.463801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.471  [2024-11-18T01:09:05.129Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:26:30.730 00:26:30.730 01:09:05 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:26:30.730 01:09:05 -- dd/basic_rw.sh@65 -- # gen_conf 00:26:30.730 01:09:05 -- dd/common.sh@31 -- # xtrace_disable 00:26:30.730 01:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:30.730 [2024-11-18 01:09:05.130008] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:30.730 [2024-11-18 01:09:05.130443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144205 ] 00:26:30.989 { 00:26:30.989 "subsystems": [ 00:26:30.989 { 00:26:30.989 "subsystem": "bdev", 00:26:30.989 "config": [ 00:26:30.989 { 00:26:30.989 "params": { 00:26:30.989 "trtype": "pcie", 00:26:30.989 "traddr": "0000:00:06.0", 00:26:30.989 "name": "Nvme0" 00:26:30.989 }, 00:26:30.989 "method": "bdev_nvme_attach_controller" 00:26:30.989 }, 00:26:30.989 { 00:26:30.989 "method": "bdev_wait_for_examine" 00:26:30.989 } 00:26:30.989 ] 00:26:30.989 } 00:26:30.989 ] 00:26:30.989 } 00:26:30.989 [2024-11-18 01:09:05.272335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.989 [2024-11-18 01:09:05.339657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.248  [2024-11-18T01:09:06.216Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:26:31.817 00:26:31.817 01:09:05 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:26:31.818 01:09:05 -- dd/basic_rw.sh@72 -- # [[ r9jekcmjfmm356defdsqsj59sbro7butjzouhjqk8zq62tldk8hyiihi2ync0p07fy4mx7dpx6sraw0de8joh5ypydbrdw20ivjcyw1uwfe6i0xmcuhahq0hc8lmzkohyt51711timoniod5m7567u86w5954octp789zyfaichrtjagip1w7snewcrtekpdpli6qs1r4153dd3cxi5urycahtuz1s3eixcrrbyqhuqy6xlq2mtim6jngucc6o8kng2gifvcv7os3byqcnz8jj9qzp26kcl8vgdldde15dp92jgqaim8b2890bn6v5dcgnx3ccwllka2jf57xjgsoaq9h9o4t119mmvgqundrtm1hwv5l30hewogojiezlxjnzutft4vjaqv3mxre27zqcied6261v5i3y095ifphu397bksqaltli8xlfqwk2zupvh1y3l4cx3xidoetbmmqk2pvj2u3rlr0cn3y8xa80dezdnglo901qyc5nfc27h8fd0cyxxthza0iofafzqzkh52v0e9uf2yfpos1sfvi6xla1xvk75qijqx8u56xu8u1b3wrwk0o2bsy2vhm91xlmoosalx1959jyi9ceqttsmw869dbj1v8jm8c23vdpg0ejvov1koa8kc6aaiwx2tlpekvr87fdwj4jpep14yh68j8efsg3p10su4rdt1tp0pc2sff9byxb11iq5rsqflalw8mrzh51dpgkujgtm4ne4wkk8szyspl23ynys9kovf3wcgwkpq68wutbwbx6u4293utlh48vp1oig3o3bzjv7sjhmam5vc0omq98cqhxreom47buwvrujfb1p5v8z697oy4phnxxb6dfk3ypsxnkpg8zvv5xc60c1ybzbcbadx8knqdb9ddp4q5swglfpbaaf0w517rjibxmszqm1kuvo5gx1yydrinnvamkyfp0gaq68flikuhhih3rsagw5c1t9iynon3h7m8k6a5xme4bi4iflrb7maoy9vwmou2z8vmur9ioxzq1ari5buc0ehlv7x3e5i3bhggmxupprv56br5yx2zx3b42ixrhitw4vvr9odig76ndgkgie7swxoggtlfvvucqw2yub723fb8qc4q1yd3pmr9wk8dg8sna1ppxrfuehllzysrqe1eh4e7jjc85vvjsxkljj4shf7c0cjddop5h8e9yln36te2c1n6q2ccqrjrvyk0usv0dj54ov94iygkp9ya8lhvazpq88dw0hqjk35gc41ejdcrg7i5o5wmppwmsffu5ihmg2iuz3gc991r3fucpy2ubq775p61ub4vd2wif1als8ypx9y53rgkgiom406v6si5q5o0i0dmvy4y3nkxy683o2tita3yb14kkw75etnsnvqsveqr4thw0kn8v6otic0h3uehdzr55bgg3t9ja4z7utwzaqc6yv5d593hl5kxp40qw8nmg9gp8c50cso7yjet1fycf6skp95vo5xrcac9b2y93sy2cwp5kq0i7rxtr3w3tvllxjc3ey1mnaehmefke0elb8aguy94cnlvy3eqtbaxk4zjmmzs1f4h5f9ulbt6zpf1ywzlftjlews9qfvwamqvqrm3hkfw6qkyam5c2i4rwqcycm2gttynv56pv06gw3i5w43oz6prh9fynthcadiwgoee1jl8fl1nwxp3l43pvomx0npyfk1lv6o908ma1umy2osgtqtt4lqyg1cf3hqpt97io6k9xgbjteis4qjubmsqkk987m9eyjpehoevaxwnq63ib37nfnf60x1umdgc0fojvwj9djpztbw9533p80rtfc3kv6jywdg7dsu4g0lvvyv2e9wx1w5do9105bpzin1xrayig6s9q1to08fv7ceakqd1uaz2ts0brghmiu3kad2bbh8smcvce3blllwv6eqc9ps8rtuyvigsjztw3uc88os0a4c46cl98g463o3kimcbansx1w7cz4rwgpwxqfs0lgyq28gze3yiim1iipzj6vztnmpphgdgmy4qo3ztlkvs8zm9w2d7hk8xssunq5jbs0n8yjtyiy7g9xqe5sfibgyw2xoz5sh8gx4kuhwnx30wpwf1yeo9z2j5dthhzbr2v79xi6k1g74uq6bc63k4vc1n6yi7m6j0mh3crov5saptt37e0oxunm5swa9fenjncivf88s1ri5bj6sflsycnzv7hrjd6n636vi3kz2ktd7nee0zsp9khziw4mam7zqax3xpgzey4pq1nbhp5s1859c8637ficq0w41nbc7vlodeurb9zjs4v1gbtc0vhxz5oii0c4xkx0afaaprst3z235sw6qsh
nl8ul0uhokmp66p3h65hy1jmbbt8i9ehx23y6skjq2mtfuv55emjp9p7p3ys21qcf7oowbmizc85wvgi6oddm58mdh3n0smwpjcu69lemk7o60pobmkrhd947p2mitw5troj17ntetmyz3gh7q0fgjirmy3cdc3zos49el570afe3nol4gtmpiu70zh6mqvgvy6wfv8zi44tsnzk0d475s2ebwg9wtmsfj7g5z7ax3g2ie0enkuh52ij2qs2fggo39clbild4leuj1qckngdq843cdjxhykg7n3dg24tovdbjujwrhhszpnqn9pduh3jitcifk3bfcvhmxos26szuwld20pgnffxk7ptm4lkeygip6ban92p2eyodelje5jl5wdm9t560nei0ydila71imxrdfx3s4h1gyorhtszabxte2qwjpkh2uxxugfyixtmbyys57lynbu3f6bgj4ieefu1rfpi0lmsc6h3pp84svbciz1e7oey4xc49r1btzqpsyb5h3wtuu92bl8trcfazx061omw1qab28rvkqbfg1x89w7pf84zi2r2qc3o9x6thmyesmx14xfnh5vp7ej9khlosb15elic5jgcnyi3p0n8xlufwk4y7sefmeo0l3yy9olnd8lzkndgn0i4o0nt2dic4ncb4bsnlfslwmrra71cuxqbl62fjneohhq54ykc5odfd14937qug07v2upsyhe1wpfjxipss3c6meoj3lfje4syuqhjzdwjwjojhr5qiqmvk92jzo66ze4fr4r1hsh6pqj06bwklfga1m9ua17itbf77oc980mjdjh66ais3zdix2v5wwpydjw6qfyfukffumo393pvdi182j4apsre104l641xewea0a6c69rzwyhv55goakw19lvnwrk56zp3qw66qhiqxqbbnbbxvcfdbil0e5nm3hllpmywzw2hqnvp08jg9e1rk2oxn60qem4bwb3yw6t4i147jygippsl7jq56lzflh2w9y08blqphcn1kpftvrh9wczthqo1pi5d95ydk1cd02attv6xyh0o0gelj8y2t6j3yanx9manqvk046ibbni4hy0437u8w0gb3td3j5xx7md56t13j24w69kma0ebi8pvjt0qw9hge5xc09dbonc47hz1r7dykar3etz55b70smcvkwwxj3snfq1huugz1vudq38gzpyxosl8efq0qp21q4spinwa4s1m68dnd1y5ipj5tb5upmrrynvbckcs6idm2sijosod28ev6ti183m8nsx4l6qe3v9alnbpjermurtxni0zoiutsopo6hpii75e26npvxtuxn7nes08dhpvsgt3dh9qu918iq3ookka5wrbs5gpxu2i0zfbrtf7x28c387eyhqjwqeyu6ihi4u6umf0pg993pgujqgop7hpyw1zqxagcgz1dixhla6g5buxykzm0du3l50bijbgzwo8ha3jcyx087ehjkbtrif0ly2czhpelwlcxzg52sdmky99q9bw52vparqbr7ny3ezjilamqrb71l4ed173fc0zevx3f7ivcixnwxsmyf21nwo1lgtkkd4c07jw93r7exrmuva4ocbi7oc12az1mbxb23nnp0hx4aftt9ill61xekts3nb59l7zgnok7c4xb9wjjf10x3s8bv4hxhzog3b6asrbpt12f1rpxvovrgjjpthvtwp9ft8a8vw53tr0ya == \r\9\j\e\k\c\m\j\f\m\m\3\5\6\d\e\f\d\s\q\s\j\5\9\s\b\r\o\7\b\u\t\j\z\o\u\h\j\q\k\8\z\q\6\2\t\l\d\k\8\h\y\i\i\h\i\2\y\n\c\0\p\0\7\f\y\4\m\x\7\d\p\x\6\s\r\a\w\0\d\e\8\j\o\h\5\y\p\y\d\b\r\d\w\2\0\i\v\j\c\y\w\1\u\w\f\e\6\i\0\x\m\c\u\h\a\h\q\0\h\c\8\l\m\z\k\o\h\y\t\5\1\7\1\1\t\i\m\o\n\i\o\d\5\m\7\5\6\7\u\8\6\w\5\9\5\4\o\c\t\p\7\8\9\z\y\f\a\i\c\h\r\t\j\a\g\i\p\1\w\7\s\n\e\w\c\r\t\e\k\p\d\p\l\i\6\q\s\1\r\4\1\5\3\d\d\3\c\x\i\5\u\r\y\c\a\h\t\u\z\1\s\3\e\i\x\c\r\r\b\y\q\h\u\q\y\6\x\l\q\2\m\t\i\m\6\j\n\g\u\c\c\6\o\8\k\n\g\2\g\i\f\v\c\v\7\o\s\3\b\y\q\c\n\z\8\j\j\9\q\z\p\2\6\k\c\l\8\v\g\d\l\d\d\e\1\5\d\p\9\2\j\g\q\a\i\m\8\b\2\8\9\0\b\n\6\v\5\d\c\g\n\x\3\c\c\w\l\l\k\a\2\j\f\5\7\x\j\g\s\o\a\q\9\h\9\o\4\t\1\1\9\m\m\v\g\q\u\n\d\r\t\m\1\h\w\v\5\l\3\0\h\e\w\o\g\o\j\i\e\z\l\x\j\n\z\u\t\f\t\4\v\j\a\q\v\3\m\x\r\e\2\7\z\q\c\i\e\d\6\2\6\1\v\5\i\3\y\0\9\5\i\f\p\h\u\3\9\7\b\k\s\q\a\l\t\l\i\8\x\l\f\q\w\k\2\z\u\p\v\h\1\y\3\l\4\c\x\3\x\i\d\o\e\t\b\m\m\q\k\2\p\v\j\2\u\3\r\l\r\0\c\n\3\y\8\x\a\8\0\d\e\z\d\n\g\l\o\9\0\1\q\y\c\5\n\f\c\2\7\h\8\f\d\0\c\y\x\x\t\h\z\a\0\i\o\f\a\f\z\q\z\k\h\5\2\v\0\e\9\u\f\2\y\f\p\o\s\1\s\f\v\i\6\x\l\a\1\x\v\k\7\5\q\i\j\q\x\8\u\5\6\x\u\8\u\1\b\3\w\r\w\k\0\o\2\b\s\y\2\v\h\m\9\1\x\l\m\o\o\s\a\l\x\1\9\5\9\j\y\i\9\c\e\q\t\t\s\m\w\8\6\9\d\b\j\1\v\8\j\m\8\c\2\3\v\d\p\g\0\e\j\v\o\v\1\k\o\a\8\k\c\6\a\a\i\w\x\2\t\l\p\e\k\v\r\8\7\f\d\w\j\4\j\p\e\p\1\4\y\h\6\8\j\8\e\f\s\g\3\p\1\0\s\u\4\r\d\t\1\t\p\0\p\c\2\s\f\f\9\b\y\x\b\1\1\i\q\5\r\s\q\f\l\a\l\w\8\m\r\z\h\5\1\d\p\g\k\u\j\g\t\m\4\n\e\4\w\k\k\8\s\z\y\s\p\l\2\3\y\n\y\s\9\k\o\v\f\3\w\c\g\w\k\p\q\6\8\w\u\t\b\w\b\x\6\u\4\2\9\3\u\t\l\h\4\8\v\p\1\o\i\g\3\o\3\b\z\j\v\7\s\j\h\m\a\m\5\v\c\0\o\m\q\9\8\c\q\h\x\r\e\o\m\4\7\b\u\w\v\r\u\j\f\b\1\p\5\v\8\z\6\9\7\o\y\4\p\h\n\x\x\b\6\d\f\k\3\y\p\s\x\n\k\p\g\8\z\v\v\5\x\c\6\0\c\1\y\b\z\b\c\b\a\d\x\8\k\n\q\d\b\9\d\d\p\
4\q\5\s\w\g\l\f\p\b\a\a\f\0\w\5\1\7\r\j\i\b\x\m\s\z\q\m\1\k\u\v\o\5\g\x\1\y\y\d\r\i\n\n\v\a\m\k\y\f\p\0\g\a\q\6\8\f\l\i\k\u\h\h\i\h\3\r\s\a\g\w\5\c\1\t\9\i\y\n\o\n\3\h\7\m\8\k\6\a\5\x\m\e\4\b\i\4\i\f\l\r\b\7\m\a\o\y\9\v\w\m\o\u\2\z\8\v\m\u\r\9\i\o\x\z\q\1\a\r\i\5\b\u\c\0\e\h\l\v\7\x\3\e\5\i\3\b\h\g\g\m\x\u\p\p\r\v\5\6\b\r\5\y\x\2\z\x\3\b\4\2\i\x\r\h\i\t\w\4\v\v\r\9\o\d\i\g\7\6\n\d\g\k\g\i\e\7\s\w\x\o\g\g\t\l\f\v\v\u\c\q\w\2\y\u\b\7\2\3\f\b\8\q\c\4\q\1\y\d\3\p\m\r\9\w\k\8\d\g\8\s\n\a\1\p\p\x\r\f\u\e\h\l\l\z\y\s\r\q\e\1\e\h\4\e\7\j\j\c\8\5\v\v\j\s\x\k\l\j\j\4\s\h\f\7\c\0\c\j\d\d\o\p\5\h\8\e\9\y\l\n\3\6\t\e\2\c\1\n\6\q\2\c\c\q\r\j\r\v\y\k\0\u\s\v\0\d\j\5\4\o\v\9\4\i\y\g\k\p\9\y\a\8\l\h\v\a\z\p\q\8\8\d\w\0\h\q\j\k\3\5\g\c\4\1\e\j\d\c\r\g\7\i\5\o\5\w\m\p\p\w\m\s\f\f\u\5\i\h\m\g\2\i\u\z\3\g\c\9\9\1\r\3\f\u\c\p\y\2\u\b\q\7\7\5\p\6\1\u\b\4\v\d\2\w\i\f\1\a\l\s\8\y\p\x\9\y\5\3\r\g\k\g\i\o\m\4\0\6\v\6\s\i\5\q\5\o\0\i\0\d\m\v\y\4\y\3\n\k\x\y\6\8\3\o\2\t\i\t\a\3\y\b\1\4\k\k\w\7\5\e\t\n\s\n\v\q\s\v\e\q\r\4\t\h\w\0\k\n\8\v\6\o\t\i\c\0\h\3\u\e\h\d\z\r\5\5\b\g\g\3\t\9\j\a\4\z\7\u\t\w\z\a\q\c\6\y\v\5\d\5\9\3\h\l\5\k\x\p\4\0\q\w\8\n\m\g\9\g\p\8\c\5\0\c\s\o\7\y\j\e\t\1\f\y\c\f\6\s\k\p\9\5\v\o\5\x\r\c\a\c\9\b\2\y\9\3\s\y\2\c\w\p\5\k\q\0\i\7\r\x\t\r\3\w\3\t\v\l\l\x\j\c\3\e\y\1\m\n\a\e\h\m\e\f\k\e\0\e\l\b\8\a\g\u\y\9\4\c\n\l\v\y\3\e\q\t\b\a\x\k\4\z\j\m\m\z\s\1\f\4\h\5\f\9\u\l\b\t\6\z\p\f\1\y\w\z\l\f\t\j\l\e\w\s\9\q\f\v\w\a\m\q\v\q\r\m\3\h\k\f\w\6\q\k\y\a\m\5\c\2\i\4\r\w\q\c\y\c\m\2\g\t\t\y\n\v\5\6\p\v\0\6\g\w\3\i\5\w\4\3\o\z\6\p\r\h\9\f\y\n\t\h\c\a\d\i\w\g\o\e\e\1\j\l\8\f\l\1\n\w\x\p\3\l\4\3\p\v\o\m\x\0\n\p\y\f\k\1\l\v\6\o\9\0\8\m\a\1\u\m\y\2\o\s\g\t\q\t\t\4\l\q\y\g\1\c\f\3\h\q\p\t\9\7\i\o\6\k\9\x\g\b\j\t\e\i\s\4\q\j\u\b\m\s\q\k\k\9\8\7\m\9\e\y\j\p\e\h\o\e\v\a\x\w\n\q\6\3\i\b\3\7\n\f\n\f\6\0\x\1\u\m\d\g\c\0\f\o\j\v\w\j\9\d\j\p\z\t\b\w\9\5\3\3\p\8\0\r\t\f\c\3\k\v\6\j\y\w\d\g\7\d\s\u\4\g\0\l\v\v\y\v\2\e\9\w\x\1\w\5\d\o\9\1\0\5\b\p\z\i\n\1\x\r\a\y\i\g\6\s\9\q\1\t\o\0\8\f\v\7\c\e\a\k\q\d\1\u\a\z\2\t\s\0\b\r\g\h\m\i\u\3\k\a\d\2\b\b\h\8\s\m\c\v\c\e\3\b\l\l\l\w\v\6\e\q\c\9\p\s\8\r\t\u\y\v\i\g\s\j\z\t\w\3\u\c\8\8\o\s\0\a\4\c\4\6\c\l\9\8\g\4\6\3\o\3\k\i\m\c\b\a\n\s\x\1\w\7\c\z\4\r\w\g\p\w\x\q\f\s\0\l\g\y\q\2\8\g\z\e\3\y\i\i\m\1\i\i\p\z\j\6\v\z\t\n\m\p\p\h\g\d\g\m\y\4\q\o\3\z\t\l\k\v\s\8\z\m\9\w\2\d\7\h\k\8\x\s\s\u\n\q\5\j\b\s\0\n\8\y\j\t\y\i\y\7\g\9\x\q\e\5\s\f\i\b\g\y\w\2\x\o\z\5\s\h\8\g\x\4\k\u\h\w\n\x\3\0\w\p\w\f\1\y\e\o\9\z\2\j\5\d\t\h\h\z\b\r\2\v\7\9\x\i\6\k\1\g\7\4\u\q\6\b\c\6\3\k\4\v\c\1\n\6\y\i\7\m\6\j\0\m\h\3\c\r\o\v\5\s\a\p\t\t\3\7\e\0\o\x\u\n\m\5\s\w\a\9\f\e\n\j\n\c\i\v\f\8\8\s\1\r\i\5\b\j\6\s\f\l\s\y\c\n\z\v\7\h\r\j\d\6\n\6\3\6\v\i\3\k\z\2\k\t\d\7\n\e\e\0\z\s\p\9\k\h\z\i\w\4\m\a\m\7\z\q\a\x\3\x\p\g\z\e\y\4\p\q\1\n\b\h\p\5\s\1\8\5\9\c\8\6\3\7\f\i\c\q\0\w\4\1\n\b\c\7\v\l\o\d\e\u\r\b\9\z\j\s\4\v\1\g\b\t\c\0\v\h\x\z\5\o\i\i\0\c\4\x\k\x\0\a\f\a\a\p\r\s\t\3\z\2\3\5\s\w\6\q\s\h\n\l\8\u\l\0\u\h\o\k\m\p\6\6\p\3\h\6\5\h\y\1\j\m\b\b\t\8\i\9\e\h\x\2\3\y\6\s\k\j\q\2\m\t\f\u\v\5\5\e\m\j\p\9\p\7\p\3\y\s\2\1\q\c\f\7\o\o\w\b\m\i\z\c\8\5\w\v\g\i\6\o\d\d\m\5\8\m\d\h\3\n\0\s\m\w\p\j\c\u\6\9\l\e\m\k\7\o\6\0\p\o\b\m\k\r\h\d\9\4\7\p\2\m\i\t\w\5\t\r\o\j\1\7\n\t\e\t\m\y\z\3\g\h\7\q\0\f\g\j\i\r\m\y\3\c\d\c\3\z\o\s\4\9\e\l\5\7\0\a\f\e\3\n\o\l\4\g\t\m\p\i\u\7\0\z\h\6\m\q\v\g\v\y\6\w\f\v\8\z\i\4\4\t\s\n\z\k\0\d\4\7\5\s\2\e\b\w\g\9\w\t\m\s\f\j\7\g\5\z\7\a\x\3\g\2\i\e\0\e\n\k\u\h\5\2\i\j\2\q\s\2\f\g\g\o\3\9\c\l\b\i\l\d\4\l\e\u\j\1\q\c\k\n\g\d\q\8\4\3\c\d\j\x\h\y\k\g\7\n\3\d\g\2\4\t\o\v\d\b\j\u\j\w\r\h\h\s\z\p\n\q\n\9\p\d\u\h\3\j\i\t\c\i\f\k
\3\b\f\c\v\h\m\x\o\s\2\6\s\z\u\w\l\d\2\0\p\g\n\f\f\x\k\7\p\t\m\4\l\k\e\y\g\i\p\6\b\a\n\9\2\p\2\e\y\o\d\e\l\j\e\5\j\l\5\w\d\m\9\t\5\6\0\n\e\i\0\y\d\i\l\a\7\1\i\m\x\r\d\f\x\3\s\4\h\1\g\y\o\r\h\t\s\z\a\b\x\t\e\2\q\w\j\p\k\h\2\u\x\x\u\g\f\y\i\x\t\m\b\y\y\s\5\7\l\y\n\b\u\3\f\6\b\g\j\4\i\e\e\f\u\1\r\f\p\i\0\l\m\s\c\6\h\3\p\p\8\4\s\v\b\c\i\z\1\e\7\o\e\y\4\x\c\4\9\r\1\b\t\z\q\p\s\y\b\5\h\3\w\t\u\u\9\2\b\l\8\t\r\c\f\a\z\x\0\6\1\o\m\w\1\q\a\b\2\8\r\v\k\q\b\f\g\1\x\8\9\w\7\p\f\8\4\z\i\2\r\2\q\c\3\o\9\x\6\t\h\m\y\e\s\m\x\1\4\x\f\n\h\5\v\p\7\e\j\9\k\h\l\o\s\b\1\5\e\l\i\c\5\j\g\c\n\y\i\3\p\0\n\8\x\l\u\f\w\k\4\y\7\s\e\f\m\e\o\0\l\3\y\y\9\o\l\n\d\8\l\z\k\n\d\g\n\0\i\4\o\0\n\t\2\d\i\c\4\n\c\b\4\b\s\n\l\f\s\l\w\m\r\r\a\7\1\c\u\x\q\b\l\6\2\f\j\n\e\o\h\h\q\5\4\y\k\c\5\o\d\f\d\1\4\9\3\7\q\u\g\0\7\v\2\u\p\s\y\h\e\1\w\p\f\j\x\i\p\s\s\3\c\6\m\e\o\j\3\l\f\j\e\4\s\y\u\q\h\j\z\d\w\j\w\j\o\j\h\r\5\q\i\q\m\v\k\9\2\j\z\o\6\6\z\e\4\f\r\4\r\1\h\s\h\6\p\q\j\0\6\b\w\k\l\f\g\a\1\m\9\u\a\1\7\i\t\b\f\7\7\o\c\9\8\0\m\j\d\j\h\6\6\a\i\s\3\z\d\i\x\2\v\5\w\w\p\y\d\j\w\6\q\f\y\f\u\k\f\f\u\m\o\3\9\3\p\v\d\i\1\8\2\j\4\a\p\s\r\e\1\0\4\l\6\4\1\x\e\w\e\a\0\a\6\c\6\9\r\z\w\y\h\v\5\5\g\o\a\k\w\1\9\l\v\n\w\r\k\5\6\z\p\3\q\w\6\6\q\h\i\q\x\q\b\b\n\b\b\x\v\c\f\d\b\i\l\0\e\5\n\m\3\h\l\l\p\m\y\w\z\w\2\h\q\n\v\p\0\8\j\g\9\e\1\r\k\2\o\x\n\6\0\q\e\m\4\b\w\b\3\y\w\6\t\4\i\1\4\7\j\y\g\i\p\p\s\l\7\j\q\5\6\l\z\f\l\h\2\w\9\y\0\8\b\l\q\p\h\c\n\1\k\p\f\t\v\r\h\9\w\c\z\t\h\q\o\1\p\i\5\d\9\5\y\d\k\1\c\d\0\2\a\t\t\v\6\x\y\h\0\o\0\g\e\l\j\8\y\2\t\6\j\3\y\a\n\x\9\m\a\n\q\v\k\0\4\6\i\b\b\n\i\4\h\y\0\4\3\7\u\8\w\0\g\b\3\t\d\3\j\5\x\x\7\m\d\5\6\t\1\3\j\2\4\w\6\9\k\m\a\0\e\b\i\8\p\v\j\t\0\q\w\9\h\g\e\5\x\c\0\9\d\b\o\n\c\4\7\h\z\1\r\7\d\y\k\a\r\3\e\t\z\5\5\b\7\0\s\m\c\v\k\w\w\x\j\3\s\n\f\q\1\h\u\u\g\z\1\v\u\d\q\3\8\g\z\p\y\x\o\s\l\8\e\f\q\0\q\p\2\1\q\4\s\p\i\n\w\a\4\s\1\m\6\8\d\n\d\1\y\5\i\p\j\5\t\b\5\u\p\m\r\r\y\n\v\b\c\k\c\s\6\i\d\m\2\s\i\j\o\s\o\d\2\8\e\v\6\t\i\1\8\3\m\8\n\s\x\4\l\6\q\e\3\v\9\a\l\n\b\p\j\e\r\m\u\r\t\x\n\i\0\z\o\i\u\t\s\o\p\o\6\h\p\i\i\7\5\e\2\6\n\p\v\x\t\u\x\n\7\n\e\s\0\8\d\h\p\v\s\g\t\3\d\h\9\q\u\9\1\8\i\q\3\o\o\k\k\a\5\w\r\b\s\5\g\p\x\u\2\i\0\z\f\b\r\t\f\7\x\2\8\c\3\8\7\e\y\h\q\j\w\q\e\y\u\6\i\h\i\4\u\6\u\m\f\0\p\g\9\9\3\p\g\u\j\q\g\o\p\7\h\p\y\w\1\z\q\x\a\g\c\g\z\1\d\i\x\h\l\a\6\g\5\b\u\x\y\k\z\m\0\d\u\3\l\5\0\b\i\j\b\g\z\w\o\8\h\a\3\j\c\y\x\0\8\7\e\h\j\k\b\t\r\i\f\0\l\y\2\c\z\h\p\e\l\w\l\c\x\z\g\5\2\s\d\m\k\y\9\9\q\9\b\w\5\2\v\p\a\r\q\b\r\7\n\y\3\e\z\j\i\l\a\m\q\r\b\7\1\l\4\e\d\1\7\3\f\c\0\z\e\v\x\3\f\7\i\v\c\i\x\n\w\x\s\m\y\f\2\1\n\w\o\1\l\g\t\k\k\d\4\c\0\7\j\w\9\3\r\7\e\x\r\m\u\v\a\4\o\c\b\i\7\o\c\1\2\a\z\1\m\b\x\b\2\3\n\n\p\0\h\x\4\a\f\t\t\9\i\l\l\6\1\x\e\k\t\s\3\n\b\5\9\l\7\z\g\n\o\k\7\c\4\x\b\9\w\j\j\f\1\0\x\3\s\8\b\v\4\h\x\h\z\o\g\3\b\6\a\s\r\b\p\t\1\2\f\1\r\p\x\v\o\v\r\g\j\j\p\t\h\v\t\w\p\9\f\t\8\a\8\v\w\5\3\t\r\0\y\a ]] 00:26:31.818 00:26:31.818 real 0m1.840s 00:26:31.818 user 0m1.135s 00:26:31.818 sys 0m0.562s 00:26:31.818 01:09:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:31.818 01:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:31.818 ************************************ 00:26:31.818 END TEST dd_rw_offset 00:26:31.818 ************************************ 00:26:31.818 01:09:06 -- dd/basic_rw.sh@1 -- # cleanup 00:26:31.818 01:09:06 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:26:31.818 01:09:06 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:31.818 01:09:06 -- dd/common.sh@11 -- # local nvme_ref= 00:26:31.818 01:09:06 -- dd/common.sh@12 -- # local size=0xffff 00:26:31.818 01:09:06 -- dd/common.sh@14 -- 
# local bs=1048576 00:26:31.818 01:09:06 -- dd/common.sh@15 -- # local count=1 00:26:31.818 01:09:06 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:31.818 01:09:06 -- dd/common.sh@18 -- # gen_conf 00:26:31.818 01:09:06 -- dd/common.sh@31 -- # xtrace_disable 00:26:31.818 01:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:31.818 { 00:26:31.818 "subsystems": [ 00:26:31.818 { 00:26:31.818 "subsystem": "bdev", 00:26:31.818 "config": [ 00:26:31.818 { 00:26:31.818 "params": { 00:26:31.818 "trtype": "pcie", 00:26:31.818 "traddr": "0000:00:06.0", 00:26:31.818 "name": "Nvme0" 00:26:31.818 }, 00:26:31.818 "method": "bdev_nvme_attach_controller" 00:26:31.818 }, 00:26:31.818 { 00:26:31.818 "method": "bdev_wait_for_examine" 00:26:31.818 } 00:26:31.818 ] 00:26:31.818 } 00:26:31.818 ] 00:26:31.818 } 00:26:31.818 [2024-11-18 01:09:06.080539] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:31.818 [2024-11-18 01:09:06.081194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144239 ] 00:26:32.077 [2024-11-18 01:09:06.236447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.077 [2024-11-18 01:09:06.304858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.336  [2024-11-18T01:09:06.994Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:26:32.595 00:26:32.595 01:09:06 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:32.595 ************************************ 00:26:32.595 END TEST spdk_dd_basic_rw 00:26:32.595 ************************************ 00:26:32.595 00:26:32.595 real 0m23.852s 00:26:32.595 user 0m15.171s 00:26:32.595 sys 0m6.934s 00:26:32.595 01:09:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:32.595 01:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:32.595 01:09:06 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:26:32.595 01:09:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:32.595 01:09:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:32.595 01:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:32.595 ************************************ 00:26:32.595 START TEST spdk_dd_posix 00:26:32.595 ************************************ 00:26:32.595 01:09:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:26:32.854 * Looking for test storage... 
00:26:32.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:32.854 01:09:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:32.854 01:09:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:32.854 01:09:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:32.854 01:09:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:32.854 01:09:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:32.854 01:09:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:32.854 01:09:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:32.854 01:09:07 -- scripts/common.sh@335 -- # IFS=.-: 00:26:32.854 01:09:07 -- scripts/common.sh@335 -- # read -ra ver1 00:26:32.854 01:09:07 -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.854 01:09:07 -- scripts/common.sh@336 -- # read -ra ver2 00:26:32.854 01:09:07 -- scripts/common.sh@337 -- # local 'op=<' 00:26:32.854 01:09:07 -- scripts/common.sh@339 -- # ver1_l=2 00:26:32.854 01:09:07 -- scripts/common.sh@340 -- # ver2_l=1 00:26:32.854 01:09:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:32.854 01:09:07 -- scripts/common.sh@343 -- # case "$op" in 00:26:32.854 01:09:07 -- scripts/common.sh@344 -- # : 1 00:26:32.854 01:09:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:32.854 01:09:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:32.854 01:09:07 -- scripts/common.sh@364 -- # decimal 1 00:26:32.854 01:09:07 -- scripts/common.sh@352 -- # local d=1 00:26:32.854 01:09:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.854 01:09:07 -- scripts/common.sh@354 -- # echo 1 00:26:32.854 01:09:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:32.854 01:09:07 -- scripts/common.sh@365 -- # decimal 2 00:26:32.854 01:09:07 -- scripts/common.sh@352 -- # local d=2 00:26:32.854 01:09:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.854 01:09:07 -- scripts/common.sh@354 -- # echo 2 00:26:32.854 01:09:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:32.854 01:09:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:32.854 01:09:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:32.854 01:09:07 -- scripts/common.sh@367 -- # return 0 00:26:32.854 01:09:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.854 01:09:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:32.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.854 --rc genhtml_branch_coverage=1 00:26:32.854 --rc genhtml_function_coverage=1 00:26:32.854 --rc genhtml_legend=1 00:26:32.854 --rc geninfo_all_blocks=1 00:26:32.854 --rc geninfo_unexecuted_blocks=1 00:26:32.854 00:26:32.854 ' 00:26:32.854 01:09:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:32.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.854 --rc genhtml_branch_coverage=1 00:26:32.854 --rc genhtml_function_coverage=1 00:26:32.854 --rc genhtml_legend=1 00:26:32.854 --rc geninfo_all_blocks=1 00:26:32.854 --rc geninfo_unexecuted_blocks=1 00:26:32.854 00:26:32.854 ' 00:26:32.854 01:09:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:32.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.854 --rc genhtml_branch_coverage=1 00:26:32.854 --rc genhtml_function_coverage=1 00:26:32.854 --rc genhtml_legend=1 00:26:32.854 --rc geninfo_all_blocks=1 00:26:32.854 --rc geninfo_unexecuted_blocks=1 00:26:32.854 00:26:32.854 ' 00:26:32.854 01:09:07 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:32.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.854 --rc genhtml_branch_coverage=1 00:26:32.854 --rc genhtml_function_coverage=1 00:26:32.854 --rc genhtml_legend=1 00:26:32.854 --rc geninfo_all_blocks=1 00:26:32.854 --rc geninfo_unexecuted_blocks=1 00:26:32.854 00:26:32.854 ' 00:26:32.854 01:09:07 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:32.854 01:09:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.854 01:09:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.854 01:09:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.854 01:09:07 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:32.854 01:09:07 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:32.854 01:09:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:32.854 01:09:07 -- paths/export.sh@5 -- # export PATH 00:26:32.854 01:09:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:32.854 01:09:07 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:26:32.854 01:09:07 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:26:32.854 01:09:07 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:26:32.854 01:09:07 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:26:32.854 01:09:07 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:32.854 01:09:07 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:32.854 01:09:07 -- 
dd/posix.sh@130 -- # tests 00:26:32.854 01:09:07 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:26:32.854 * First test run, using AIO 00:26:32.854 01:09:07 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:26:32.854 01:09:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:32.854 01:09:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:32.854 01:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:32.854 ************************************ 00:26:32.854 START TEST dd_flag_append 00:26:32.854 ************************************ 00:26:32.854 01:09:07 -- common/autotest_common.sh@1114 -- # append 00:26:32.854 01:09:07 -- dd/posix.sh@16 -- # local dump0 00:26:32.854 01:09:07 -- dd/posix.sh@17 -- # local dump1 00:26:32.854 01:09:07 -- dd/posix.sh@19 -- # gen_bytes 32 00:26:32.854 01:09:07 -- dd/common.sh@98 -- # xtrace_disable 00:26:32.854 01:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:32.855 01:09:07 -- dd/posix.sh@19 -- # dump0=lhxqjrgk0y2h9ipzhfud38qv6hbmgub2 00:26:32.855 01:09:07 -- dd/posix.sh@20 -- # gen_bytes 32 00:26:32.855 01:09:07 -- dd/common.sh@98 -- # xtrace_disable 00:26:32.855 01:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:32.855 01:09:07 -- dd/posix.sh@20 -- # dump1=c75xub1d0rt8q94hd50vnpj1mkgotbi3 00:26:32.855 01:09:07 -- dd/posix.sh@22 -- # printf %s lhxqjrgk0y2h9ipzhfud38qv6hbmgub2 00:26:32.855 01:09:07 -- dd/posix.sh@23 -- # printf %s c75xub1d0rt8q94hd50vnpj1mkgotbi3 00:26:32.855 01:09:07 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:26:33.114 [2024-11-18 01:09:07.305796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:33.114 [2024-11-18 01:09:07.306294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144315 ] 00:26:33.114 [2024-11-18 01:09:07.460635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.373 [2024-11-18 01:09:07.530107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.373  [2024-11-18T01:09:08.339Z] Copying: 32/32 [B] (average 31 kBps) 00:26:33.940 00:26:33.940 ************************************ 00:26:33.940 END TEST dd_flag_append 00:26:33.940 ************************************ 00:26:33.940 01:09:08 -- dd/posix.sh@27 -- # [[ c75xub1d0rt8q94hd50vnpj1mkgotbi3lhxqjrgk0y2h9ipzhfud38qv6hbmgub2 == \c\7\5\x\u\b\1\d\0\r\t\8\q\9\4\h\d\5\0\v\n\p\j\1\m\k\g\o\t\b\i\3\l\h\x\q\j\r\g\k\0\y\2\h\9\i\p\z\h\f\u\d\3\8\q\v\6\h\b\m\g\u\b\2 ]] 00:26:33.940 00:26:33.940 real 0m0.842s 00:26:33.940 user 0m0.410s 00:26:33.941 sys 0m0.297s 00:26:33.941 01:09:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:33.941 01:09:08 -- common/autotest_common.sh@10 -- # set +x 00:26:33.941 01:09:08 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:26:33.941 01:09:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:33.941 01:09:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:33.941 01:09:08 -- common/autotest_common.sh@10 -- # set +x 00:26:33.941 ************************************ 00:26:33.941 START TEST dd_flag_directory 00:26:33.941 ************************************ 00:26:33.941 01:09:08 -- common/autotest_common.sh@1114 -- # directory 00:26:33.941 01:09:08 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:33.941 01:09:08 -- common/autotest_common.sh@650 -- # local es=0 00:26:33.941 01:09:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:33.941 01:09:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:33.941 01:09:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:33.941 01:09:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:33.941 01:09:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:33.941 01:09:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:33.941 01:09:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:33.941 01:09:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:33.941 01:09:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:33.941 01:09:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:33.941 [2024-11-18 01:09:08.213335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:33.941 [2024-11-18 01:09:08.213808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144355 ] 00:26:34.200 [2024-11-18 01:09:08.368452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.200 [2024-11-18 01:09:08.435193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.200 [2024-11-18 01:09:08.551235] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:34.200 [2024-11-18 01:09:08.551587] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:34.200 [2024-11-18 01:09:08.551657] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:34.459 [2024-11-18 01:09:08.733637] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:34.718 01:09:08 -- common/autotest_common.sh@653 -- # es=236 00:26:34.718 01:09:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:34.718 01:09:08 -- common/autotest_common.sh@662 -- # es=108 00:26:34.718 01:09:08 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:34.718 01:09:08 -- common/autotest_common.sh@670 -- # es=1 00:26:34.718 01:09:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:34.718 01:09:08 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:34.718 01:09:08 -- common/autotest_common.sh@650 -- # local es=0 00:26:34.718 01:09:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:34.718 01:09:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:34.718 01:09:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:34.718 01:09:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:34.718 01:09:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:34.718 01:09:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:34.718 01:09:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:34.718 01:09:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:34.718 01:09:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:34.718 01:09:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:34.718 [2024-11-18 01:09:09.019960] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:34.718 [2024-11-18 01:09:09.020397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144367 ] 00:26:34.975 [2024-11-18 01:09:09.176108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.975 [2024-11-18 01:09:09.244153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.975 [2024-11-18 01:09:09.360459] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:34.975 [2024-11-18 01:09:09.360786] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:34.975 [2024-11-18 01:09:09.360863] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:35.233 [2024-11-18 01:09:09.542373] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:35.493 01:09:09 -- common/autotest_common.sh@653 -- # es=236 00:26:35.493 01:09:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:35.493 01:09:09 -- common/autotest_common.sh@662 -- # es=108 00:26:35.493 01:09:09 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:35.493 01:09:09 -- common/autotest_common.sh@670 -- # es=1 00:26:35.493 01:09:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:35.493 00:26:35.493 real 0m1.608s 00:26:35.493 user 0m0.853s 00:26:35.493 sys 0m0.548s 00:26:35.493 01:09:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:35.493 01:09:09 -- common/autotest_common.sh@10 -- # set +x 00:26:35.493 ************************************ 00:26:35.493 END TEST dd_flag_directory 00:26:35.493 ************************************ 00:26:35.493 01:09:09 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:26:35.493 01:09:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:35.493 01:09:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:35.493 01:09:09 -- common/autotest_common.sh@10 -- # set +x 00:26:35.493 ************************************ 00:26:35.493 START TEST dd_flag_nofollow 00:26:35.493 ************************************ 00:26:35.493 01:09:09 -- common/autotest_common.sh@1114 -- # nofollow 00:26:35.493 01:09:09 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:35.493 01:09:09 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:35.493 01:09:09 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:35.493 01:09:09 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:35.493 01:09:09 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:35.493 01:09:09 -- common/autotest_common.sh@650 -- # local es=0 00:26:35.493 01:09:09 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:35.493 01:09:09 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:35.493 01:09:09 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:35.493 01:09:09 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:35.493 01:09:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:35.493 01:09:09 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:35.493 01:09:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:35.493 01:09:09 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:35.493 01:09:09 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:35.493 01:09:09 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:35.753 [2024-11-18 01:09:09.900923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:35.753 [2024-11-18 01:09:09.901378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144408 ] 00:26:35.753 [2024-11-18 01:09:10.056258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.753 [2024-11-18 01:09:10.124402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.012 [2024-11-18 01:09:10.239572] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:36.012 [2024-11-18 01:09:10.239909] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:36.012 [2024-11-18 01:09:10.240004] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:36.271 [2024-11-18 01:09:10.421899] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:36.271 01:09:10 -- common/autotest_common.sh@653 -- # es=216 00:26:36.271 01:09:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:36.271 01:09:10 -- common/autotest_common.sh@662 -- # es=88 00:26:36.271 01:09:10 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:36.271 01:09:10 -- common/autotest_common.sh@670 -- # es=1 00:26:36.271 01:09:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:36.271 01:09:10 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:36.271 01:09:10 -- common/autotest_common.sh@650 -- # local es=0 00:26:36.271 01:09:10 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:36.271 01:09:10 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:36.271 01:09:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.271 01:09:10 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:36.271 01:09:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.271 01:09:10 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:36.271 01:09:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.271 01:09:10 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:36.271 01:09:10 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:36.271 01:09:10 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:36.530 [2024-11-18 01:09:10.700406] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:36.530 [2024-11-18 01:09:10.700905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144423 ] 00:26:36.530 [2024-11-18 01:09:10.855840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.530 [2024-11-18 01:09:10.924237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.789 [2024-11-18 01:09:11.040708] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:36.789 [2024-11-18 01:09:11.041034] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:36.789 [2024-11-18 01:09:11.041123] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:37.049 [2024-11-18 01:09:11.223852] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:37.049 01:09:11 -- common/autotest_common.sh@653 -- # es=216 00:26:37.049 01:09:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:37.049 01:09:11 -- common/autotest_common.sh@662 -- # es=88 00:26:37.049 01:09:11 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:37.049 01:09:11 -- common/autotest_common.sh@670 -- # es=1 00:26:37.049 01:09:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:37.049 01:09:11 -- dd/posix.sh@46 -- # gen_bytes 512 00:26:37.049 01:09:11 -- dd/common.sh@98 -- # xtrace_disable 00:26:37.049 01:09:11 -- common/autotest_common.sh@10 -- # set +x 00:26:37.049 01:09:11 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:37.308 [2024-11-18 01:09:11.488154] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:37.308 [2024-11-18 01:09:11.488687] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144438 ] 00:26:37.308 [2024-11-18 01:09:11.644149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.567 [2024-11-18 01:09:11.712392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.567  [2024-11-18T01:09:12.226Z] Copying: 512/512 [B] (average 500 kBps) 00:26:37.827 00:26:38.087 ************************************ 00:26:38.087 END TEST dd_flag_nofollow 00:26:38.087 ************************************ 00:26:38.087 01:09:12 -- dd/posix.sh@49 -- # [[ l5zklxyz6vymr1vesr0sdplz8z6fxc5qomlllihf915x61zigk6toijft05xpss9q6gdczdvr9omhxz38u89etnogdequ43oetnx2fa30htgcu7hs76i3kz5g8tumlxax8rta44hm7sto0ltvg7rdcuckc4qrqeojcqiacdpwlort2klw2tuc50k7kqm2uappmozdbhqgzp5q939mdkfvwduivpo8bl6v4yf4nkwh3ccmxmhn475anenp9zrsfvh4h1myvg5zs96bnm8tj091gxqoc81yu9epup74japk402f9gixscfvb1r7s000qry7b6nbciaj5dcnuwklwwjo98ui99fbdbzisd46s5k3lvu9bbzofxw3akv0ojg5r6ucdd01ls9rtmsb29crw6n3er0jkq4x0n6wnqexq3qleceszm9gqv6u0v3lxpc3ouk62musqtijp19xgirkqloncmonccc6xc7099cja1slt8w1ykfcfpo4ptysxbjqf1z == \l\5\z\k\l\x\y\z\6\v\y\m\r\1\v\e\s\r\0\s\d\p\l\z\8\z\6\f\x\c\5\q\o\m\l\l\l\i\h\f\9\1\5\x\6\1\z\i\g\k\6\t\o\i\j\f\t\0\5\x\p\s\s\9\q\6\g\d\c\z\d\v\r\9\o\m\h\x\z\3\8\u\8\9\e\t\n\o\g\d\e\q\u\4\3\o\e\t\n\x\2\f\a\3\0\h\t\g\c\u\7\h\s\7\6\i\3\k\z\5\g\8\t\u\m\l\x\a\x\8\r\t\a\4\4\h\m\7\s\t\o\0\l\t\v\g\7\r\d\c\u\c\k\c\4\q\r\q\e\o\j\c\q\i\a\c\d\p\w\l\o\r\t\2\k\l\w\2\t\u\c\5\0\k\7\k\q\m\2\u\a\p\p\m\o\z\d\b\h\q\g\z\p\5\q\9\3\9\m\d\k\f\v\w\d\u\i\v\p\o\8\b\l\6\v\4\y\f\4\n\k\w\h\3\c\c\m\x\m\h\n\4\7\5\a\n\e\n\p\9\z\r\s\f\v\h\4\h\1\m\y\v\g\5\z\s\9\6\b\n\m\8\t\j\0\9\1\g\x\q\o\c\8\1\y\u\9\e\p\u\p\7\4\j\a\p\k\4\0\2\f\9\g\i\x\s\c\f\v\b\1\r\7\s\0\0\0\q\r\y\7\b\6\n\b\c\i\a\j\5\d\c\n\u\w\k\l\w\w\j\o\9\8\u\i\9\9\f\b\d\b\z\i\s\d\4\6\s\5\k\3\l\v\u\9\b\b\z\o\f\x\w\3\a\k\v\0\o\j\g\5\r\6\u\c\d\d\0\1\l\s\9\r\t\m\s\b\2\9\c\r\w\6\n\3\e\r\0\j\k\q\4\x\0\n\6\w\n\q\e\x\q\3\q\l\e\c\e\s\z\m\9\g\q\v\6\u\0\v\3\l\x\p\c\3\o\u\k\6\2\m\u\s\q\t\i\j\p\1\9\x\g\i\r\k\q\l\o\n\c\m\o\n\c\c\c\6\x\c\7\0\9\9\c\j\a\1\s\l\t\8\w\1\y\k\f\c\f\p\o\4\p\t\y\s\x\b\j\q\f\1\z ]] 00:26:38.087 00:26:38.087 real 0m2.428s 00:26:38.087 user 0m1.343s 00:26:38.087 sys 0m0.751s 00:26:38.087 01:09:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:38.087 01:09:12 -- common/autotest_common.sh@10 -- # set +x 00:26:38.087 01:09:12 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:26:38.087 01:09:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:38.087 01:09:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:38.087 01:09:12 -- common/autotest_common.sh@10 -- # set +x 00:26:38.087 ************************************ 00:26:38.087 START TEST dd_flag_noatime 00:26:38.087 ************************************ 00:26:38.087 01:09:12 -- common/autotest_common.sh@1114 -- # noatime 00:26:38.087 01:09:12 -- dd/posix.sh@53 -- # local atime_if 00:26:38.087 01:09:12 -- dd/posix.sh@54 -- # local atime_of 00:26:38.087 01:09:12 -- dd/posix.sh@58 -- # gen_bytes 512 00:26:38.087 01:09:12 -- dd/common.sh@98 -- # xtrace_disable 00:26:38.087 01:09:12 -- common/autotest_common.sh@10 -- # set +x 00:26:38.087 01:09:12 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:38.087 01:09:12 -- dd/posix.sh@60 -- # atime_if=1731892151 
00:26:38.087 01:09:12 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:38.087 01:09:12 -- dd/posix.sh@61 -- # atime_of=1731892152 00:26:38.087 01:09:12 -- dd/posix.sh@66 -- # sleep 1 00:26:39.024 01:09:13 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:39.024 [2024-11-18 01:09:13.398815] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:39.024 [2024-11-18 01:09:13.399357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144490 ] 00:26:39.282 [2024-11-18 01:09:13.553797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.282 [2024-11-18 01:09:13.632526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.542  [2024-11-18T01:09:14.200Z] Copying: 512/512 [B] (average 500 kBps) 00:26:39.801 00:26:39.801 01:09:14 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:39.801 01:09:14 -- dd/posix.sh@69 -- # (( atime_if == 1731892151 )) 00:26:39.801 01:09:14 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:39.801 01:09:14 -- dd/posix.sh@70 -- # (( atime_of == 1731892152 )) 00:26:39.801 01:09:14 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:40.059 [2024-11-18 01:09:14.255673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:40.059 [2024-11-18 01:09:14.256203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144509 ] 00:26:40.059 [2024-11-18 01:09:14.411569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.318 [2024-11-18 01:09:14.487003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.318  [2024-11-18T01:09:15.286Z] Copying: 512/512 [B] (average 500 kBps) 00:26:40.887 00:26:40.887 01:09:15 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:40.887 01:09:15 -- dd/posix.sh@73 -- # (( atime_if < 1731892154 )) 00:26:40.887 00:26:40.887 real 0m2.734s 00:26:40.887 user 0m0.901s 00:26:40.887 sys 0m0.550s 00:26:40.887 01:09:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:40.887 01:09:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.887 ************************************ 00:26:40.887 END TEST dd_flag_noatime 00:26:40.887 ************************************ 00:26:40.887 01:09:15 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:26:40.887 01:09:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:40.887 01:09:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:40.887 01:09:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.887 ************************************ 00:26:40.887 START TEST dd_flags_misc 00:26:40.887 ************************************ 00:26:40.887 01:09:15 -- common/autotest_common.sh@1114 -- # io 00:26:40.887 01:09:15 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:26:40.887 01:09:15 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:26:40.887 01:09:15 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:26:40.887 01:09:15 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:40.887 01:09:15 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:40.887 01:09:15 -- dd/common.sh@98 -- # xtrace_disable 00:26:40.887 01:09:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.887 01:09:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:40.887 01:09:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:40.887 [2024-11-18 01:09:15.178403] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:40.887 [2024-11-18 01:09:15.178668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144538 ] 00:26:41.147 [2024-11-18 01:09:15.335649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.147 [2024-11-18 01:09:15.402551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.147  [2024-11-18T01:09:16.118Z] Copying: 512/512 [B] (average 500 kBps) 00:26:41.719 00:26:41.719 01:09:15 -- dd/posix.sh@93 -- # [[ 5ety2eq00nhsu3uajt3hd0x2jw2s35c9kk94hz15x0wly2kke0plh5a2ao9mw38dfqa9ebo36ntmmu4otl4mwosbg7o99rd2fbo9t0j2cg1txlff3gju87b3njj4cdn58z3zt8kzho0plgaei81f6nj01myltodv5sy0uww1jjbabk7i8mihbu9wzvzzn4prw91em82qiidjbh0yyv5qz7tsictpab5gq0f167g4kqkcraq2a5qcov1ua9waotxf75dcpruhia7nint5p3wsoi5c78h9y72z0r94w1nvwfyy1awoq1x6is0jw5w7wi1btyci5c3bagdeg83361yiz7jtd9hqvwggky8h3eoh152ojvu14lofgak9am7ddk3eobg9wa9crnjah02tn8en2neaoruwfccd0lwebqlfztfxp2n8s5h5ryehjhdr17o5kw0jrklzhmsjy3nczhsuftqjsn5xhtil41s4h1i621sthgc5o2fzvgfabuavbp5e == \5\e\t\y\2\e\q\0\0\n\h\s\u\3\u\a\j\t\3\h\d\0\x\2\j\w\2\s\3\5\c\9\k\k\9\4\h\z\1\5\x\0\w\l\y\2\k\k\e\0\p\l\h\5\a\2\a\o\9\m\w\3\8\d\f\q\a\9\e\b\o\3\6\n\t\m\m\u\4\o\t\l\4\m\w\o\s\b\g\7\o\9\9\r\d\2\f\b\o\9\t\0\j\2\c\g\1\t\x\l\f\f\3\g\j\u\8\7\b\3\n\j\j\4\c\d\n\5\8\z\3\z\t\8\k\z\h\o\0\p\l\g\a\e\i\8\1\f\6\n\j\0\1\m\y\l\t\o\d\v\5\s\y\0\u\w\w\1\j\j\b\a\b\k\7\i\8\m\i\h\b\u\9\w\z\v\z\z\n\4\p\r\w\9\1\e\m\8\2\q\i\i\d\j\b\h\0\y\y\v\5\q\z\7\t\s\i\c\t\p\a\b\5\g\q\0\f\1\6\7\g\4\k\q\k\c\r\a\q\2\a\5\q\c\o\v\1\u\a\9\w\a\o\t\x\f\7\5\d\c\p\r\u\h\i\a\7\n\i\n\t\5\p\3\w\s\o\i\5\c\7\8\h\9\y\7\2\z\0\r\9\4\w\1\n\v\w\f\y\y\1\a\w\o\q\1\x\6\i\s\0\j\w\5\w\7\w\i\1\b\t\y\c\i\5\c\3\b\a\g\d\e\g\8\3\3\6\1\y\i\z\7\j\t\d\9\h\q\v\w\g\g\k\y\8\h\3\e\o\h\1\5\2\o\j\v\u\1\4\l\o\f\g\a\k\9\a\m\7\d\d\k\3\e\o\b\g\9\w\a\9\c\r\n\j\a\h\0\2\t\n\8\e\n\2\n\e\a\o\r\u\w\f\c\c\d\0\l\w\e\b\q\l\f\z\t\f\x\p\2\n\8\s\5\h\5\r\y\e\h\j\h\d\r\1\7\o\5\k\w\0\j\r\k\l\z\h\m\s\j\y\3\n\c\z\h\s\u\f\t\q\j\s\n\5\x\h\t\i\l\4\1\s\4\h\1\i\6\2\1\s\t\h\g\c\5\o\2\f\z\v\g\f\a\b\u\a\v\b\p\5\e ]] 00:26:41.719 01:09:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:41.719 01:09:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:41.719 [2024-11-18 01:09:16.001261] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:41.719 [2024-11-18 01:09:16.002102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144552 ] 00:26:41.978 [2024-11-18 01:09:16.156227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.978 [2024-11-18 01:09:16.228881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.978  [2024-11-18T01:09:16.946Z] Copying: 512/512 [B] (average 500 kBps) 00:26:42.547 00:26:42.547 01:09:16 -- dd/posix.sh@93 -- # [[ 5ety2eq00nhsu3uajt3hd0x2jw2s35c9kk94hz15x0wly2kke0plh5a2ao9mw38dfqa9ebo36ntmmu4otl4mwosbg7o99rd2fbo9t0j2cg1txlff3gju87b3njj4cdn58z3zt8kzho0plgaei81f6nj01myltodv5sy0uww1jjbabk7i8mihbu9wzvzzn4prw91em82qiidjbh0yyv5qz7tsictpab5gq0f167g4kqkcraq2a5qcov1ua9waotxf75dcpruhia7nint5p3wsoi5c78h9y72z0r94w1nvwfyy1awoq1x6is0jw5w7wi1btyci5c3bagdeg83361yiz7jtd9hqvwggky8h3eoh152ojvu14lofgak9am7ddk3eobg9wa9crnjah02tn8en2neaoruwfccd0lwebqlfztfxp2n8s5h5ryehjhdr17o5kw0jrklzhmsjy3nczhsuftqjsn5xhtil41s4h1i621sthgc5o2fzvgfabuavbp5e == \5\e\t\y\2\e\q\0\0\n\h\s\u\3\u\a\j\t\3\h\d\0\x\2\j\w\2\s\3\5\c\9\k\k\9\4\h\z\1\5\x\0\w\l\y\2\k\k\e\0\p\l\h\5\a\2\a\o\9\m\w\3\8\d\f\q\a\9\e\b\o\3\6\n\t\m\m\u\4\o\t\l\4\m\w\o\s\b\g\7\o\9\9\r\d\2\f\b\o\9\t\0\j\2\c\g\1\t\x\l\f\f\3\g\j\u\8\7\b\3\n\j\j\4\c\d\n\5\8\z\3\z\t\8\k\z\h\o\0\p\l\g\a\e\i\8\1\f\6\n\j\0\1\m\y\l\t\o\d\v\5\s\y\0\u\w\w\1\j\j\b\a\b\k\7\i\8\m\i\h\b\u\9\w\z\v\z\z\n\4\p\r\w\9\1\e\m\8\2\q\i\i\d\j\b\h\0\y\y\v\5\q\z\7\t\s\i\c\t\p\a\b\5\g\q\0\f\1\6\7\g\4\k\q\k\c\r\a\q\2\a\5\q\c\o\v\1\u\a\9\w\a\o\t\x\f\7\5\d\c\p\r\u\h\i\a\7\n\i\n\t\5\p\3\w\s\o\i\5\c\7\8\h\9\y\7\2\z\0\r\9\4\w\1\n\v\w\f\y\y\1\a\w\o\q\1\x\6\i\s\0\j\w\5\w\7\w\i\1\b\t\y\c\i\5\c\3\b\a\g\d\e\g\8\3\3\6\1\y\i\z\7\j\t\d\9\h\q\v\w\g\g\k\y\8\h\3\e\o\h\1\5\2\o\j\v\u\1\4\l\o\f\g\a\k\9\a\m\7\d\d\k\3\e\o\b\g\9\w\a\9\c\r\n\j\a\h\0\2\t\n\8\e\n\2\n\e\a\o\r\u\w\f\c\c\d\0\l\w\e\b\q\l\f\z\t\f\x\p\2\n\8\s\5\h\5\r\y\e\h\j\h\d\r\1\7\o\5\k\w\0\j\r\k\l\z\h\m\s\j\y\3\n\c\z\h\s\u\f\t\q\j\s\n\5\x\h\t\i\l\4\1\s\4\h\1\i\6\2\1\s\t\h\g\c\5\o\2\f\z\v\g\f\a\b\u\a\v\b\p\5\e ]] 00:26:42.547 01:09:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:42.547 01:09:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:42.547 [2024-11-18 01:09:16.846083] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:42.547 [2024-11-18 01:09:16.846369] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144572 ] 00:26:42.806 [2024-11-18 01:09:17.000459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.806 [2024-11-18 01:09:17.067386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.806  [2024-11-18T01:09:17.782Z] Copying: 512/512 [B] (average 166 kBps) 00:26:43.383 00:26:43.383 01:09:17 -- dd/posix.sh@93 -- # [[ 5ety2eq00nhsu3uajt3hd0x2jw2s35c9kk94hz15x0wly2kke0plh5a2ao9mw38dfqa9ebo36ntmmu4otl4mwosbg7o99rd2fbo9t0j2cg1txlff3gju87b3njj4cdn58z3zt8kzho0plgaei81f6nj01myltodv5sy0uww1jjbabk7i8mihbu9wzvzzn4prw91em82qiidjbh0yyv5qz7tsictpab5gq0f167g4kqkcraq2a5qcov1ua9waotxf75dcpruhia7nint5p3wsoi5c78h9y72z0r94w1nvwfyy1awoq1x6is0jw5w7wi1btyci5c3bagdeg83361yiz7jtd9hqvwggky8h3eoh152ojvu14lofgak9am7ddk3eobg9wa9crnjah02tn8en2neaoruwfccd0lwebqlfztfxp2n8s5h5ryehjhdr17o5kw0jrklzhmsjy3nczhsuftqjsn5xhtil41s4h1i621sthgc5o2fzvgfabuavbp5e == \5\e\t\y\2\e\q\0\0\n\h\s\u\3\u\a\j\t\3\h\d\0\x\2\j\w\2\s\3\5\c\9\k\k\9\4\h\z\1\5\x\0\w\l\y\2\k\k\e\0\p\l\h\5\a\2\a\o\9\m\w\3\8\d\f\q\a\9\e\b\o\3\6\n\t\m\m\u\4\o\t\l\4\m\w\o\s\b\g\7\o\9\9\r\d\2\f\b\o\9\t\0\j\2\c\g\1\t\x\l\f\f\3\g\j\u\8\7\b\3\n\j\j\4\c\d\n\5\8\z\3\z\t\8\k\z\h\o\0\p\l\g\a\e\i\8\1\f\6\n\j\0\1\m\y\l\t\o\d\v\5\s\y\0\u\w\w\1\j\j\b\a\b\k\7\i\8\m\i\h\b\u\9\w\z\v\z\z\n\4\p\r\w\9\1\e\m\8\2\q\i\i\d\j\b\h\0\y\y\v\5\q\z\7\t\s\i\c\t\p\a\b\5\g\q\0\f\1\6\7\g\4\k\q\k\c\r\a\q\2\a\5\q\c\o\v\1\u\a\9\w\a\o\t\x\f\7\5\d\c\p\r\u\h\i\a\7\n\i\n\t\5\p\3\w\s\o\i\5\c\7\8\h\9\y\7\2\z\0\r\9\4\w\1\n\v\w\f\y\y\1\a\w\o\q\1\x\6\i\s\0\j\w\5\w\7\w\i\1\b\t\y\c\i\5\c\3\b\a\g\d\e\g\8\3\3\6\1\y\i\z\7\j\t\d\9\h\q\v\w\g\g\k\y\8\h\3\e\o\h\1\5\2\o\j\v\u\1\4\l\o\f\g\a\k\9\a\m\7\d\d\k\3\e\o\b\g\9\w\a\9\c\r\n\j\a\h\0\2\t\n\8\e\n\2\n\e\a\o\r\u\w\f\c\c\d\0\l\w\e\b\q\l\f\z\t\f\x\p\2\n\8\s\5\h\5\r\y\e\h\j\h\d\r\1\7\o\5\k\w\0\j\r\k\l\z\h\m\s\j\y\3\n\c\z\h\s\u\f\t\q\j\s\n\5\x\h\t\i\l\4\1\s\4\h\1\i\6\2\1\s\t\h\g\c\5\o\2\f\z\v\g\f\a\b\u\a\v\b\p\5\e ]] 00:26:43.383 01:09:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:43.383 01:09:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:43.383 [2024-11-18 01:09:17.687620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:43.383 [2024-11-18 01:09:17.688196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144591 ] 00:26:43.642 [2024-11-18 01:09:17.845321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.642 [2024-11-18 01:09:17.926443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.902  [2024-11-18T01:09:18.560Z] Copying: 512/512 [B] (average 166 kBps) 00:26:44.161 00:26:44.161 01:09:18 -- dd/posix.sh@93 -- # [[ 5ety2eq00nhsu3uajt3hd0x2jw2s35c9kk94hz15x0wly2kke0plh5a2ao9mw38dfqa9ebo36ntmmu4otl4mwosbg7o99rd2fbo9t0j2cg1txlff3gju87b3njj4cdn58z3zt8kzho0plgaei81f6nj01myltodv5sy0uww1jjbabk7i8mihbu9wzvzzn4prw91em82qiidjbh0yyv5qz7tsictpab5gq0f167g4kqkcraq2a5qcov1ua9waotxf75dcpruhia7nint5p3wsoi5c78h9y72z0r94w1nvwfyy1awoq1x6is0jw5w7wi1btyci5c3bagdeg83361yiz7jtd9hqvwggky8h3eoh152ojvu14lofgak9am7ddk3eobg9wa9crnjah02tn8en2neaoruwfccd0lwebqlfztfxp2n8s5h5ryehjhdr17o5kw0jrklzhmsjy3nczhsuftqjsn5xhtil41s4h1i621sthgc5o2fzvgfabuavbp5e == \5\e\t\y\2\e\q\0\0\n\h\s\u\3\u\a\j\t\3\h\d\0\x\2\j\w\2\s\3\5\c\9\k\k\9\4\h\z\1\5\x\0\w\l\y\2\k\k\e\0\p\l\h\5\a\2\a\o\9\m\w\3\8\d\f\q\a\9\e\b\o\3\6\n\t\m\m\u\4\o\t\l\4\m\w\o\s\b\g\7\o\9\9\r\d\2\f\b\o\9\t\0\j\2\c\g\1\t\x\l\f\f\3\g\j\u\8\7\b\3\n\j\j\4\c\d\n\5\8\z\3\z\t\8\k\z\h\o\0\p\l\g\a\e\i\8\1\f\6\n\j\0\1\m\y\l\t\o\d\v\5\s\y\0\u\w\w\1\j\j\b\a\b\k\7\i\8\m\i\h\b\u\9\w\z\v\z\z\n\4\p\r\w\9\1\e\m\8\2\q\i\i\d\j\b\h\0\y\y\v\5\q\z\7\t\s\i\c\t\p\a\b\5\g\q\0\f\1\6\7\g\4\k\q\k\c\r\a\q\2\a\5\q\c\o\v\1\u\a\9\w\a\o\t\x\f\7\5\d\c\p\r\u\h\i\a\7\n\i\n\t\5\p\3\w\s\o\i\5\c\7\8\h\9\y\7\2\z\0\r\9\4\w\1\n\v\w\f\y\y\1\a\w\o\q\1\x\6\i\s\0\j\w\5\w\7\w\i\1\b\t\y\c\i\5\c\3\b\a\g\d\e\g\8\3\3\6\1\y\i\z\7\j\t\d\9\h\q\v\w\g\g\k\y\8\h\3\e\o\h\1\5\2\o\j\v\u\1\4\l\o\f\g\a\k\9\a\m\7\d\d\k\3\e\o\b\g\9\w\a\9\c\r\n\j\a\h\0\2\t\n\8\e\n\2\n\e\a\o\r\u\w\f\c\c\d\0\l\w\e\b\q\l\f\z\t\f\x\p\2\n\8\s\5\h\5\r\y\e\h\j\h\d\r\1\7\o\5\k\w\0\j\r\k\l\z\h\m\s\j\y\3\n\c\z\h\s\u\f\t\q\j\s\n\5\x\h\t\i\l\4\1\s\4\h\1\i\6\2\1\s\t\h\g\c\5\o\2\f\z\v\g\f\a\b\u\a\v\b\p\5\e ]] 00:26:44.161 01:09:18 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:44.161 01:09:18 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:44.161 01:09:18 -- dd/common.sh@98 -- # xtrace_disable 00:26:44.161 01:09:18 -- common/autotest_common.sh@10 -- # set +x 00:26:44.161 01:09:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:44.161 01:09:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:44.420 [2024-11-18 01:09:18.571959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:44.420 [2024-11-18 01:09:18.572423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144607 ] 00:26:44.420 [2024-11-18 01:09:18.729508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.420 [2024-11-18 01:09:18.803833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.679  [2024-11-18T01:09:19.336Z] Copying: 512/512 [B] (average 500 kBps) 00:26:44.937 00:26:45.196 01:09:19 -- dd/posix.sh@93 -- # [[ ry5gkkxg6zay8aadtwyx9pveaenr8w3qclh1i7rf20p36o6jasmkixb9ivnou5xj5s7l8ngpaw65i83vpr1atz37123o2w8o5j96nlf7h5s99dodm74696f0888e3mgj6rmu671o089rot1u9kfgb15zcey8qshlob8mn7zb5mn308rb2zwwfhd9sa3s1y0hlbgwsdszihsubh4mmiirhcfak46fvq665s9jc0c04c60vshrwakmk4qzo3me640e63g9gs0ls7u4e5v62nvr85i6q0w8j47ytwn3xm44kt7yx594batyzzp1i1pjtzbenmwelzyt4v5z07sy0qn3sw400s9uk95qwz63twd5pdufpbho5tqwq4t5ozg6hcr4dhsxmywnvzabpnsjtitd5uwfo6kn14uptyas3waxckyjjdhs3rkucvdscda4cifub25kvt0qp8xidwtgw0truf5x0zdlr69oc2z1xatway69pq2hwywkbiz9rhue73fi == \r\y\5\g\k\k\x\g\6\z\a\y\8\a\a\d\t\w\y\x\9\p\v\e\a\e\n\r\8\w\3\q\c\l\h\1\i\7\r\f\2\0\p\3\6\o\6\j\a\s\m\k\i\x\b\9\i\v\n\o\u\5\x\j\5\s\7\l\8\n\g\p\a\w\6\5\i\8\3\v\p\r\1\a\t\z\3\7\1\2\3\o\2\w\8\o\5\j\9\6\n\l\f\7\h\5\s\9\9\d\o\d\m\7\4\6\9\6\f\0\8\8\8\e\3\m\g\j\6\r\m\u\6\7\1\o\0\8\9\r\o\t\1\u\9\k\f\g\b\1\5\z\c\e\y\8\q\s\h\l\o\b\8\m\n\7\z\b\5\m\n\3\0\8\r\b\2\z\w\w\f\h\d\9\s\a\3\s\1\y\0\h\l\b\g\w\s\d\s\z\i\h\s\u\b\h\4\m\m\i\i\r\h\c\f\a\k\4\6\f\v\q\6\6\5\s\9\j\c\0\c\0\4\c\6\0\v\s\h\r\w\a\k\m\k\4\q\z\o\3\m\e\6\4\0\e\6\3\g\9\g\s\0\l\s\7\u\4\e\5\v\6\2\n\v\r\8\5\i\6\q\0\w\8\j\4\7\y\t\w\n\3\x\m\4\4\k\t\7\y\x\5\9\4\b\a\t\y\z\z\p\1\i\1\p\j\t\z\b\e\n\m\w\e\l\z\y\t\4\v\5\z\0\7\s\y\0\q\n\3\s\w\4\0\0\s\9\u\k\9\5\q\w\z\6\3\t\w\d\5\p\d\u\f\p\b\h\o\5\t\q\w\q\4\t\5\o\z\g\6\h\c\r\4\d\h\s\x\m\y\w\n\v\z\a\b\p\n\s\j\t\i\t\d\5\u\w\f\o\6\k\n\1\4\u\p\t\y\a\s\3\w\a\x\c\k\y\j\j\d\h\s\3\r\k\u\c\v\d\s\c\d\a\4\c\i\f\u\b\2\5\k\v\t\0\q\p\8\x\i\d\w\t\g\w\0\t\r\u\f\5\x\0\z\d\l\r\6\9\o\c\2\z\1\x\a\t\w\a\y\6\9\p\q\2\h\w\y\w\k\b\i\z\9\r\h\u\e\7\3\f\i ]] 00:26:45.196 01:09:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:45.196 01:09:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:45.196 [2024-11-18 01:09:19.426694] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:45.196 [2024-11-18 01:09:19.427277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144620 ] 00:26:45.196 [2024-11-18 01:09:19.585272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.455 [2024-11-18 01:09:19.658659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.455  [2024-11-18T01:09:20.422Z] Copying: 512/512 [B] (average 500 kBps) 00:26:46.023 00:26:46.023 01:09:20 -- dd/posix.sh@93 -- # [[ ry5gkkxg6zay8aadtwyx9pveaenr8w3qclh1i7rf20p36o6jasmkixb9ivnou5xj5s7l8ngpaw65i83vpr1atz37123o2w8o5j96nlf7h5s99dodm74696f0888e3mgj6rmu671o089rot1u9kfgb15zcey8qshlob8mn7zb5mn308rb2zwwfhd9sa3s1y0hlbgwsdszihsubh4mmiirhcfak46fvq665s9jc0c04c60vshrwakmk4qzo3me640e63g9gs0ls7u4e5v62nvr85i6q0w8j47ytwn3xm44kt7yx594batyzzp1i1pjtzbenmwelzyt4v5z07sy0qn3sw400s9uk95qwz63twd5pdufpbho5tqwq4t5ozg6hcr4dhsxmywnvzabpnsjtitd5uwfo6kn14uptyas3waxckyjjdhs3rkucvdscda4cifub25kvt0qp8xidwtgw0truf5x0zdlr69oc2z1xatway69pq2hwywkbiz9rhue73fi == \r\y\5\g\k\k\x\g\6\z\a\y\8\a\a\d\t\w\y\x\9\p\v\e\a\e\n\r\8\w\3\q\c\l\h\1\i\7\r\f\2\0\p\3\6\o\6\j\a\s\m\k\i\x\b\9\i\v\n\o\u\5\x\j\5\s\7\l\8\n\g\p\a\w\6\5\i\8\3\v\p\r\1\a\t\z\3\7\1\2\3\o\2\w\8\o\5\j\9\6\n\l\f\7\h\5\s\9\9\d\o\d\m\7\4\6\9\6\f\0\8\8\8\e\3\m\g\j\6\r\m\u\6\7\1\o\0\8\9\r\o\t\1\u\9\k\f\g\b\1\5\z\c\e\y\8\q\s\h\l\o\b\8\m\n\7\z\b\5\m\n\3\0\8\r\b\2\z\w\w\f\h\d\9\s\a\3\s\1\y\0\h\l\b\g\w\s\d\s\z\i\h\s\u\b\h\4\m\m\i\i\r\h\c\f\a\k\4\6\f\v\q\6\6\5\s\9\j\c\0\c\0\4\c\6\0\v\s\h\r\w\a\k\m\k\4\q\z\o\3\m\e\6\4\0\e\6\3\g\9\g\s\0\l\s\7\u\4\e\5\v\6\2\n\v\r\8\5\i\6\q\0\w\8\j\4\7\y\t\w\n\3\x\m\4\4\k\t\7\y\x\5\9\4\b\a\t\y\z\z\p\1\i\1\p\j\t\z\b\e\n\m\w\e\l\z\y\t\4\v\5\z\0\7\s\y\0\q\n\3\s\w\4\0\0\s\9\u\k\9\5\q\w\z\6\3\t\w\d\5\p\d\u\f\p\b\h\o\5\t\q\w\q\4\t\5\o\z\g\6\h\c\r\4\d\h\s\x\m\y\w\n\v\z\a\b\p\n\s\j\t\i\t\d\5\u\w\f\o\6\k\n\1\4\u\p\t\y\a\s\3\w\a\x\c\k\y\j\j\d\h\s\3\r\k\u\c\v\d\s\c\d\a\4\c\i\f\u\b\2\5\k\v\t\0\q\p\8\x\i\d\w\t\g\w\0\t\r\u\f\5\x\0\z\d\l\r\6\9\o\c\2\z\1\x\a\t\w\a\y\6\9\p\q\2\h\w\y\w\k\b\i\z\9\r\h\u\e\7\3\f\i ]] 00:26:46.023 01:09:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:46.023 01:09:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:46.023 [2024-11-18 01:09:20.281542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:46.023 [2024-11-18 01:09:20.282712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144630 ] 00:26:46.282 [2024-11-18 01:09:20.440389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.282 [2024-11-18 01:09:20.520109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.282  [2024-11-18T01:09:21.247Z] Copying: 512/512 [B] (average 125 kBps) 00:26:46.848 00:26:46.848 01:09:21 -- dd/posix.sh@93 -- # [[ ry5gkkxg6zay8aadtwyx9pveaenr8w3qclh1i7rf20p36o6jasmkixb9ivnou5xj5s7l8ngpaw65i83vpr1atz37123o2w8o5j96nlf7h5s99dodm74696f0888e3mgj6rmu671o089rot1u9kfgb15zcey8qshlob8mn7zb5mn308rb2zwwfhd9sa3s1y0hlbgwsdszihsubh4mmiirhcfak46fvq665s9jc0c04c60vshrwakmk4qzo3me640e63g9gs0ls7u4e5v62nvr85i6q0w8j47ytwn3xm44kt7yx594batyzzp1i1pjtzbenmwelzyt4v5z07sy0qn3sw400s9uk95qwz63twd5pdufpbho5tqwq4t5ozg6hcr4dhsxmywnvzabpnsjtitd5uwfo6kn14uptyas3waxckyjjdhs3rkucvdscda4cifub25kvt0qp8xidwtgw0truf5x0zdlr69oc2z1xatway69pq2hwywkbiz9rhue73fi == \r\y\5\g\k\k\x\g\6\z\a\y\8\a\a\d\t\w\y\x\9\p\v\e\a\e\n\r\8\w\3\q\c\l\h\1\i\7\r\f\2\0\p\3\6\o\6\j\a\s\m\k\i\x\b\9\i\v\n\o\u\5\x\j\5\s\7\l\8\n\g\p\a\w\6\5\i\8\3\v\p\r\1\a\t\z\3\7\1\2\3\o\2\w\8\o\5\j\9\6\n\l\f\7\h\5\s\9\9\d\o\d\m\7\4\6\9\6\f\0\8\8\8\e\3\m\g\j\6\r\m\u\6\7\1\o\0\8\9\r\o\t\1\u\9\k\f\g\b\1\5\z\c\e\y\8\q\s\h\l\o\b\8\m\n\7\z\b\5\m\n\3\0\8\r\b\2\z\w\w\f\h\d\9\s\a\3\s\1\y\0\h\l\b\g\w\s\d\s\z\i\h\s\u\b\h\4\m\m\i\i\r\h\c\f\a\k\4\6\f\v\q\6\6\5\s\9\j\c\0\c\0\4\c\6\0\v\s\h\r\w\a\k\m\k\4\q\z\o\3\m\e\6\4\0\e\6\3\g\9\g\s\0\l\s\7\u\4\e\5\v\6\2\n\v\r\8\5\i\6\q\0\w\8\j\4\7\y\t\w\n\3\x\m\4\4\k\t\7\y\x\5\9\4\b\a\t\y\z\z\p\1\i\1\p\j\t\z\b\e\n\m\w\e\l\z\y\t\4\v\5\z\0\7\s\y\0\q\n\3\s\w\4\0\0\s\9\u\k\9\5\q\w\z\6\3\t\w\d\5\p\d\u\f\p\b\h\o\5\t\q\w\q\4\t\5\o\z\g\6\h\c\r\4\d\h\s\x\m\y\w\n\v\z\a\b\p\n\s\j\t\i\t\d\5\u\w\f\o\6\k\n\1\4\u\p\t\y\a\s\3\w\a\x\c\k\y\j\j\d\h\s\3\r\k\u\c\v\d\s\c\d\a\4\c\i\f\u\b\2\5\k\v\t\0\q\p\8\x\i\d\w\t\g\w\0\t\r\u\f\5\x\0\z\d\l\r\6\9\o\c\2\z\1\x\a\t\w\a\y\6\9\p\q\2\h\w\y\w\k\b\i\z\9\r\h\u\e\7\3\f\i ]] 00:26:46.848 01:09:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:46.848 01:09:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:46.848 [2024-11-18 01:09:21.160602] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:46.848 [2024-11-18 01:09:21.161086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144646 ] 00:26:47.106 [2024-11-18 01:09:21.317103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.106 [2024-11-18 01:09:21.400201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.365  [2024-11-18T01:09:22.021Z] Copying: 512/512 [B] (average 100 kBps) 00:26:47.622 00:26:47.622 ************************************ 00:26:47.622 END TEST dd_flags_misc 00:26:47.622 ************************************ 00:26:47.622 01:09:21 -- dd/posix.sh@93 -- # [[ ry5gkkxg6zay8aadtwyx9pveaenr8w3qclh1i7rf20p36o6jasmkixb9ivnou5xj5s7l8ngpaw65i83vpr1atz37123o2w8o5j96nlf7h5s99dodm74696f0888e3mgj6rmu671o089rot1u9kfgb15zcey8qshlob8mn7zb5mn308rb2zwwfhd9sa3s1y0hlbgwsdszihsubh4mmiirhcfak46fvq665s9jc0c04c60vshrwakmk4qzo3me640e63g9gs0ls7u4e5v62nvr85i6q0w8j47ytwn3xm44kt7yx594batyzzp1i1pjtzbenmwelzyt4v5z07sy0qn3sw400s9uk95qwz63twd5pdufpbho5tqwq4t5ozg6hcr4dhsxmywnvzabpnsjtitd5uwfo6kn14uptyas3waxckyjjdhs3rkucvdscda4cifub25kvt0qp8xidwtgw0truf5x0zdlr69oc2z1xatway69pq2hwywkbiz9rhue73fi == \r\y\5\g\k\k\x\g\6\z\a\y\8\a\a\d\t\w\y\x\9\p\v\e\a\e\n\r\8\w\3\q\c\l\h\1\i\7\r\f\2\0\p\3\6\o\6\j\a\s\m\k\i\x\b\9\i\v\n\o\u\5\x\j\5\s\7\l\8\n\g\p\a\w\6\5\i\8\3\v\p\r\1\a\t\z\3\7\1\2\3\o\2\w\8\o\5\j\9\6\n\l\f\7\h\5\s\9\9\d\o\d\m\7\4\6\9\6\f\0\8\8\8\e\3\m\g\j\6\r\m\u\6\7\1\o\0\8\9\r\o\t\1\u\9\k\f\g\b\1\5\z\c\e\y\8\q\s\h\l\o\b\8\m\n\7\z\b\5\m\n\3\0\8\r\b\2\z\w\w\f\h\d\9\s\a\3\s\1\y\0\h\l\b\g\w\s\d\s\z\i\h\s\u\b\h\4\m\m\i\i\r\h\c\f\a\k\4\6\f\v\q\6\6\5\s\9\j\c\0\c\0\4\c\6\0\v\s\h\r\w\a\k\m\k\4\q\z\o\3\m\e\6\4\0\e\6\3\g\9\g\s\0\l\s\7\u\4\e\5\v\6\2\n\v\r\8\5\i\6\q\0\w\8\j\4\7\y\t\w\n\3\x\m\4\4\k\t\7\y\x\5\9\4\b\a\t\y\z\z\p\1\i\1\p\j\t\z\b\e\n\m\w\e\l\z\y\t\4\v\5\z\0\7\s\y\0\q\n\3\s\w\4\0\0\s\9\u\k\9\5\q\w\z\6\3\t\w\d\5\p\d\u\f\p\b\h\o\5\t\q\w\q\4\t\5\o\z\g\6\h\c\r\4\d\h\s\x\m\y\w\n\v\z\a\b\p\n\s\j\t\i\t\d\5\u\w\f\o\6\k\n\1\4\u\p\t\y\a\s\3\w\a\x\c\k\y\j\j\d\h\s\3\r\k\u\c\v\d\s\c\d\a\4\c\i\f\u\b\2\5\k\v\t\0\q\p\8\x\i\d\w\t\g\w\0\t\r\u\f\5\x\0\z\d\l\r\6\9\o\c\2\z\1\x\a\t\w\a\y\6\9\p\q\2\h\w\y\w\k\b\i\z\9\r\h\u\e\7\3\f\i ]] 00:26:47.622 00:26:47.622 real 0m6.865s 00:26:47.622 user 0m3.538s 00:26:47.622 sys 0m2.211s 00:26:47.622 01:09:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:47.622 01:09:21 -- common/autotest_common.sh@10 -- # set +x 00:26:47.881 01:09:22 -- dd/posix.sh@131 -- # tests_forced_aio 00:26:47.881 01:09:22 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:26:47.881 * Second test run, using AIO 00:26:47.881 01:09:22 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:26:47.881 01:09:22 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:26:47.881 01:09:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:47.881 01:09:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:47.881 01:09:22 -- common/autotest_common.sh@10 -- # set +x 00:26:47.881 ************************************ 00:26:47.881 START TEST dd_flag_append_forced_aio 00:26:47.881 ************************************ 00:26:47.881 01:09:22 -- common/autotest_common.sh@1114 -- # append 00:26:47.881 01:09:22 -- dd/posix.sh@16 -- # local dump0 00:26:47.881 01:09:22 -- dd/posix.sh@17 -- # local dump1 00:26:47.881 01:09:22 -- dd/posix.sh@19 -- # gen_bytes 32 00:26:47.881 01:09:22 -- dd/common.sh@98 
-- # xtrace_disable 00:26:47.881 01:09:22 -- common/autotest_common.sh@10 -- # set +x 00:26:47.881 01:09:22 -- dd/posix.sh@19 -- # dump0=ltoc2zf318djpopzomlcs1xgd9h4u4pi 00:26:47.881 01:09:22 -- dd/posix.sh@20 -- # gen_bytes 32 00:26:47.881 01:09:22 -- dd/common.sh@98 -- # xtrace_disable 00:26:47.881 01:09:22 -- common/autotest_common.sh@10 -- # set +x 00:26:47.881 01:09:22 -- dd/posix.sh@20 -- # dump1=jf0qw1wbblx8du1o5swp3k3o0585na2s 00:26:47.881 01:09:22 -- dd/posix.sh@22 -- # printf %s ltoc2zf318djpopzomlcs1xgd9h4u4pi 00:26:47.881 01:09:22 -- dd/posix.sh@23 -- # printf %s jf0qw1wbblx8du1o5swp3k3o0585na2s 00:26:47.881 01:09:22 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:26:47.881 [2024-11-18 01:09:22.121880] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:47.881 [2024-11-18 01:09:22.122670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144680 ] 00:26:47.881 [2024-11-18 01:09:22.280446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.140 [2024-11-18 01:09:22.366863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.140  [2024-11-18T01:09:23.107Z] Copying: 32/32 [B] (average 31 kBps) 00:26:48.708 00:26:48.708 01:09:22 -- dd/posix.sh@27 -- # [[ jf0qw1wbblx8du1o5swp3k3o0585na2sltoc2zf318djpopzomlcs1xgd9h4u4pi == \j\f\0\q\w\1\w\b\b\l\x\8\d\u\1\o\5\s\w\p\3\k\3\o\0\5\8\5\n\a\2\s\l\t\o\c\2\z\f\3\1\8\d\j\p\o\p\z\o\m\l\c\s\1\x\g\d\9\h\4\u\4\p\i ]] 00:26:48.708 00:26:48.708 real 0m0.866s 00:26:48.708 user 0m0.457s 00:26:48.708 sys 0m0.276s 00:26:48.708 ************************************ 00:26:48.708 END TEST dd_flag_append_forced_aio 00:26:48.708 ************************************ 00:26:48.708 01:09:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:48.708 01:09:22 -- common/autotest_common.sh@10 -- # set +x 00:26:48.708 01:09:22 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:26:48.708 01:09:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:48.708 01:09:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:48.708 01:09:22 -- common/autotest_common.sh@10 -- # set +x 00:26:48.708 ************************************ 00:26:48.708 START TEST dd_flag_directory_forced_aio 00:26:48.708 ************************************ 00:26:48.708 01:09:22 -- common/autotest_common.sh@1114 -- # directory 00:26:48.708 01:09:22 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:48.708 01:09:22 -- common/autotest_common.sh@650 -- # local es=0 00:26:48.708 01:09:22 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:48.708 01:09:22 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:48.708 01:09:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:48.708 01:09:22 -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:48.708 01:09:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:48.708 01:09:22 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:48.708 01:09:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:48.708 01:09:22 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:48.708 01:09:22 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:48.708 01:09:22 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:48.708 [2024-11-18 01:09:23.053126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:48.708 [2024-11-18 01:09:23.053969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144721 ] 00:26:48.968 [2024-11-18 01:09:23.208501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.968 [2024-11-18 01:09:23.275538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.226 [2024-11-18 01:09:23.391199] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:49.226 [2024-11-18 01:09:23.391295] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:49.226 [2024-11-18 01:09:23.391346] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:49.226 [2024-11-18 01:09:23.572676] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:49.485 01:09:23 -- common/autotest_common.sh@653 -- # es=236 00:26:49.485 01:09:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:49.485 01:09:23 -- common/autotest_common.sh@662 -- # es=108 00:26:49.485 01:09:23 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:49.485 01:09:23 -- common/autotest_common.sh@670 -- # es=1 00:26:49.485 01:09:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:49.485 01:09:23 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:49.485 01:09:23 -- common/autotest_common.sh@650 -- # local es=0 00:26:49.485 01:09:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:49.485 01:09:23 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:49.485 01:09:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:49.485 01:09:23 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:49.485 01:09:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:49.485 01:09:23 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:49.485 01:09:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:49.485 01:09:23 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:49.485 01:09:23 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:49.485 01:09:23 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:49.485 [2024-11-18 01:09:23.852439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:49.485 [2024-11-18 01:09:23.852726] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144741 ] 00:26:49.744 [2024-11-18 01:09:24.008203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.744 [2024-11-18 01:09:24.074349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.004 [2024-11-18 01:09:24.189508] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:50.004 [2024-11-18 01:09:24.189625] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:50.004 [2024-11-18 01:09:24.189670] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:50.004 [2024-11-18 01:09:24.370466] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:50.263 01:09:24 -- common/autotest_common.sh@653 -- # es=236 00:26:50.263 01:09:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:50.263 ************************************ 00:26:50.263 END TEST dd_flag_directory_forced_aio 00:26:50.263 ************************************ 00:26:50.263 01:09:24 -- common/autotest_common.sh@662 -- # es=108 00:26:50.263 01:09:24 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:50.263 01:09:24 -- common/autotest_common.sh@670 -- # es=1 00:26:50.263 01:09:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:50.263 00:26:50.263 real 0m1.601s 00:26:50.263 user 0m0.869s 00:26:50.263 sys 0m0.532s 00:26:50.263 01:09:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:50.263 01:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:50.263 01:09:24 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:26:50.263 01:09:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:50.263 01:09:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:50.263 01:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:50.263 ************************************ 00:26:50.263 START TEST dd_flag_nofollow_forced_aio 00:26:50.263 ************************************ 00:26:50.263 01:09:24 -- common/autotest_common.sh@1114 -- # nofollow 00:26:50.263 01:09:24 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:50.263 01:09:24 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:50.263 01:09:24 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:50.263 01:09:24 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:50.263 01:09:24 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:50.263 01:09:24 -- common/autotest_common.sh@650 -- # local es=0 00:26:50.263 01:09:24 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:50.263 01:09:24 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:50.263 01:09:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.263 01:09:24 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:50.263 01:09:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.263 01:09:24 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:50.263 01:09:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.263 01:09:24 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:50.263 01:09:24 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:50.263 01:09:24 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:50.522 [2024-11-18 01:09:24.732043] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:50.522 [2024-11-18 01:09:24.732883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144775 ] 00:26:50.522 [2024-11-18 01:09:24.889233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.780 [2024-11-18 01:09:24.971728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.780 [2024-11-18 01:09:25.091409] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:50.780 [2024-11-18 01:09:25.091516] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:50.780 [2024-11-18 01:09:25.091561] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:51.039 [2024-11-18 01:09:25.278257] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:51.298 01:09:25 -- common/autotest_common.sh@653 -- # es=216 00:26:51.298 01:09:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:51.298 01:09:25 -- common/autotest_common.sh@662 -- # es=88 00:26:51.298 01:09:25 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:51.298 01:09:25 -- common/autotest_common.sh@670 -- # es=1 00:26:51.298 01:09:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:51.298 01:09:25 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:51.298 01:09:25 -- common/autotest_common.sh@650 -- # local es=0 00:26:51.298 01:09:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:51.298 01:09:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:51.298 01:09:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:51.298 01:09:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:51.298 01:09:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:51.298 01:09:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:51.298 01:09:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:51.298 01:09:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:51.298 01:09:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:51.298 01:09:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:51.298 [2024-11-18 01:09:25.555087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:51.298 [2024-11-18 01:09:25.555368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144796 ] 00:26:51.557 [2024-11-18 01:09:25.712234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.557 [2024-11-18 01:09:25.789424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.557 [2024-11-18 01:09:25.908802] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:51.558 [2024-11-18 01:09:25.908921] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:51.558 [2024-11-18 01:09:25.908968] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:51.816 [2024-11-18 01:09:26.094463] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:52.076 01:09:26 -- common/autotest_common.sh@653 -- # es=216 00:26:52.076 01:09:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:52.076 01:09:26 -- common/autotest_common.sh@662 -- # es=88 00:26:52.076 01:09:26 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:52.076 01:09:26 -- common/autotest_common.sh@670 -- # es=1 00:26:52.076 01:09:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:52.076 01:09:26 -- dd/posix.sh@46 -- # gen_bytes 512 00:26:52.076 01:09:26 -- dd/common.sh@98 -- # xtrace_disable 00:26:52.076 01:09:26 -- common/autotest_common.sh@10 -- # set +x 00:26:52.076 01:09:26 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:52.076 [2024-11-18 01:09:26.365383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:52.076 [2024-11-18 01:09:26.365641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144803 ] 00:26:52.335 [2024-11-18 01:09:26.521123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.335 [2024-11-18 01:09:26.600781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.335  [2024-11-18T01:09:27.303Z] Copying: 512/512 [B] (average 500 kBps) 00:26:52.904 00:26:52.904 01:09:27 -- dd/posix.sh@49 -- # [[ c31lza3hn3rpmu15wmk6zsi3apyy7o07lbdqilmqmzqyebvonj1aj8l7hauertskv2nsy3onvvfv3qm2qzamib7wnda8ljvs8uqgc5tkdjwmluf25cq3ajntb7ropo7uoffspnmm93du9rlum2meyvzq8w567ri4yidp692045s9dfaruokn2mi4q1jft8eb8fxm8r3hwetgjzwcfxs4x0ezsr3muhidvdii9if6exotl3biqkciqujneoffkudtkoz387sco4euvqrolnp1bkzo1b8t0uy78u5m3c21kdc9igli9v6ith0gd3cyj5scj6oqy63w3315j8g7250ut9csa6d4ct4yzz959ogqy08r7wexg0k8asrbkjvlx5fqa9o1dmfobtylvdnvkef620radwl3jponw1asl4fsb5i2g1vjjcmrqcamsevwmvultma7cfijj2zpzckyg6pxtlmum1dsc1q1oz6j5gmwx4jwuj94brboom1vwfnqg6i7 == \c\3\1\l\z\a\3\h\n\3\r\p\m\u\1\5\w\m\k\6\z\s\i\3\a\p\y\y\7\o\0\7\l\b\d\q\i\l\m\q\m\z\q\y\e\b\v\o\n\j\1\a\j\8\l\7\h\a\u\e\r\t\s\k\v\2\n\s\y\3\o\n\v\v\f\v\3\q\m\2\q\z\a\m\i\b\7\w\n\d\a\8\l\j\v\s\8\u\q\g\c\5\t\k\d\j\w\m\l\u\f\2\5\c\q\3\a\j\n\t\b\7\r\o\p\o\7\u\o\f\f\s\p\n\m\m\9\3\d\u\9\r\l\u\m\2\m\e\y\v\z\q\8\w\5\6\7\r\i\4\y\i\d\p\6\9\2\0\4\5\s\9\d\f\a\r\u\o\k\n\2\m\i\4\q\1\j\f\t\8\e\b\8\f\x\m\8\r\3\h\w\e\t\g\j\z\w\c\f\x\s\4\x\0\e\z\s\r\3\m\u\h\i\d\v\d\i\i\9\i\f\6\e\x\o\t\l\3\b\i\q\k\c\i\q\u\j\n\e\o\f\f\k\u\d\t\k\o\z\3\8\7\s\c\o\4\e\u\v\q\r\o\l\n\p\1\b\k\z\o\1\b\8\t\0\u\y\7\8\u\5\m\3\c\2\1\k\d\c\9\i\g\l\i\9\v\6\i\t\h\0\g\d\3\c\y\j\5\s\c\j\6\o\q\y\6\3\w\3\3\1\5\j\8\g\7\2\5\0\u\t\9\c\s\a\6\d\4\c\t\4\y\z\z\9\5\9\o\g\q\y\0\8\r\7\w\e\x\g\0\k\8\a\s\r\b\k\j\v\l\x\5\f\q\a\9\o\1\d\m\f\o\b\t\y\l\v\d\n\v\k\e\f\6\2\0\r\a\d\w\l\3\j\p\o\n\w\1\a\s\l\4\f\s\b\5\i\2\g\1\v\j\j\c\m\r\q\c\a\m\s\e\v\w\m\v\u\l\t\m\a\7\c\f\i\j\j\2\z\p\z\c\k\y\g\6\p\x\t\l\m\u\m\1\d\s\c\1\q\1\o\z\6\j\5\g\m\w\x\4\j\w\u\j\9\4\b\r\b\o\o\m\1\v\w\f\n\q\g\6\i\7 ]] 00:26:52.904 00:26:52.904 real 0m2.508s 00:26:52.904 user 0m1.308s 00:26:52.904 sys 0m0.863s 00:26:52.904 01:09:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:52.904 ************************************ 00:26:52.904 END TEST dd_flag_nofollow_forced_aio 00:26:52.904 ************************************ 00:26:52.904 01:09:27 -- common/autotest_common.sh@10 -- # set +x 00:26:52.904 01:09:27 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:26:52.904 01:09:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:52.904 01:09:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:52.904 01:09:27 -- common/autotest_common.sh@10 -- # set +x 00:26:52.904 ************************************ 00:26:52.904 START TEST dd_flag_noatime_forced_aio 00:26:52.904 ************************************ 00:26:52.904 01:09:27 -- common/autotest_common.sh@1114 -- # noatime 00:26:52.904 01:09:27 -- dd/posix.sh@53 -- # local atime_if 00:26:52.904 01:09:27 -- dd/posix.sh@54 -- # local atime_of 00:26:52.904 01:09:27 -- dd/posix.sh@58 -- # gen_bytes 512 00:26:52.904 01:09:27 -- dd/common.sh@98 -- # xtrace_disable 00:26:52.904 01:09:27 -- common/autotest_common.sh@10 -- # set +x 00:26:52.904 01:09:27 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:52.904 01:09:27 -- dd/posix.sh@60 -- 
# atime_if=1731892166 00:26:52.904 01:09:27 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:52.904 01:09:27 -- dd/posix.sh@61 -- # atime_of=1731892167 00:26:52.904 01:09:27 -- dd/posix.sh@66 -- # sleep 1 00:26:54.282 01:09:28 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:54.282 [2024-11-18 01:09:28.317822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:54.282 [2024-11-18 01:09:28.318104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144858 ] 00:26:54.282 [2024-11-18 01:09:28.476303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.282 [2024-11-18 01:09:28.572204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.541  [2024-11-18T01:09:29.199Z] Copying: 512/512 [B] (average 500 kBps) 00:26:54.800 00:26:54.800 01:09:29 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:54.800 01:09:29 -- dd/posix.sh@69 -- # (( atime_if == 1731892166 )) 00:26:54.800 01:09:29 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:54.800 01:09:29 -- dd/posix.sh@70 -- # (( atime_of == 1731892167 )) 00:26:54.800 01:09:29 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:55.058 [2024-11-18 01:09:29.215932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:55.058 [2024-11-18 01:09:29.216206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144877 ] 00:26:55.058 [2024-11-18 01:09:29.372778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.058 [2024-11-18 01:09:29.448963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.316  [2024-11-18T01:09:29.974Z] Copying: 512/512 [B] (average 500 kBps) 00:26:55.575 00:26:55.834 01:09:29 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:55.834 01:09:29 -- dd/posix.sh@73 -- # (( atime_if < 1731892169 )) 00:26:55.834 00:26:55.834 real 0m2.775s 00:26:55.834 user 0m0.963s 00:26:55.834 sys 0m0.552s 00:26:55.834 01:09:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:55.834 ************************************ 00:26:55.834 END TEST dd_flag_noatime_forced_aio 00:26:55.834 ************************************ 00:26:55.834 01:09:29 -- common/autotest_common.sh@10 -- # set +x 00:26:55.834 01:09:30 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:26:55.834 01:09:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:55.834 01:09:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:55.834 01:09:30 -- common/autotest_common.sh@10 -- # set +x 00:26:55.834 ************************************ 00:26:55.834 START TEST dd_flags_misc_forced_aio 00:26:55.834 ************************************ 00:26:55.834 01:09:30 -- common/autotest_common.sh@1114 -- # io 00:26:55.834 01:09:30 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:26:55.834 01:09:30 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:26:55.834 01:09:30 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:26:55.834 01:09:30 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:55.834 01:09:30 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:55.834 01:09:30 -- dd/common.sh@98 -- # xtrace_disable 00:26:55.834 01:09:30 -- common/autotest_common.sh@10 -- # set +x 00:26:55.834 01:09:30 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:55.834 01:09:30 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:55.834 [2024-11-18 01:09:30.130674] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:55.834 [2024-11-18 01:09:30.130945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144915 ] 00:26:56.093 [2024-11-18 01:09:30.287244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.093 [2024-11-18 01:09:30.359483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.093  [2024-11-18T01:09:31.060Z] Copying: 512/512 [B] (average 500 kBps) 00:26:56.661 00:26:56.661 01:09:30 -- dd/posix.sh@93 -- # [[ 2lku6nk5x98rbv9gb5i22w0s2tbbinsjdj9ez7qufocka7pja5ejggmidf5dfrstdku0p0wa32kp6s6raebawovy1vosmhxcqpscpgksug0zcbhqb8xniozodo30jurh6jwdr5n6u9ojinny95gwd24fm2uip7ruejyybvxeqjxk984n7yiok24mapd200zagcln9gnmgty11j958nrzs1w9j5hykctaqycjwz395tmpq6lttwv3pjhkitrdfcqv2so5dh8gbcjfyxht28hzdls1jjctv4oxezi26pheflkht269uk14m8b3pjx1s4roaf098zekogwe346nd21t0a6kdbas00eosez7qyhiofycbfmghevviic2i4p9estvrd56sx0cokvfi6a5abpf7nf2iy0n5gx49bmovf0en84kn38wjnr60s28nqav03vwlv0kb9zrcehe3fo0rximalq9ovn4rask5h2vni2g12q113nefmqwttsf6z1sx4il == \2\l\k\u\6\n\k\5\x\9\8\r\b\v\9\g\b\5\i\2\2\w\0\s\2\t\b\b\i\n\s\j\d\j\9\e\z\7\q\u\f\o\c\k\a\7\p\j\a\5\e\j\g\g\m\i\d\f\5\d\f\r\s\t\d\k\u\0\p\0\w\a\3\2\k\p\6\s\6\r\a\e\b\a\w\o\v\y\1\v\o\s\m\h\x\c\q\p\s\c\p\g\k\s\u\g\0\z\c\b\h\q\b\8\x\n\i\o\z\o\d\o\3\0\j\u\r\h\6\j\w\d\r\5\n\6\u\9\o\j\i\n\n\y\9\5\g\w\d\2\4\f\m\2\u\i\p\7\r\u\e\j\y\y\b\v\x\e\q\j\x\k\9\8\4\n\7\y\i\o\k\2\4\m\a\p\d\2\0\0\z\a\g\c\l\n\9\g\n\m\g\t\y\1\1\j\9\5\8\n\r\z\s\1\w\9\j\5\h\y\k\c\t\a\q\y\c\j\w\z\3\9\5\t\m\p\q\6\l\t\t\w\v\3\p\j\h\k\i\t\r\d\f\c\q\v\2\s\o\5\d\h\8\g\b\c\j\f\y\x\h\t\2\8\h\z\d\l\s\1\j\j\c\t\v\4\o\x\e\z\i\2\6\p\h\e\f\l\k\h\t\2\6\9\u\k\1\4\m\8\b\3\p\j\x\1\s\4\r\o\a\f\0\9\8\z\e\k\o\g\w\e\3\4\6\n\d\2\1\t\0\a\6\k\d\b\a\s\0\0\e\o\s\e\z\7\q\y\h\i\o\f\y\c\b\f\m\g\h\e\v\v\i\i\c\2\i\4\p\9\e\s\t\v\r\d\5\6\s\x\0\c\o\k\v\f\i\6\a\5\a\b\p\f\7\n\f\2\i\y\0\n\5\g\x\4\9\b\m\o\v\f\0\e\n\8\4\k\n\3\8\w\j\n\r\6\0\s\2\8\n\q\a\v\0\3\v\w\l\v\0\k\b\9\z\r\c\e\h\e\3\f\o\0\r\x\i\m\a\l\q\9\o\v\n\4\r\a\s\k\5\h\2\v\n\i\2\g\1\2\q\1\1\3\n\e\f\m\q\w\t\t\s\f\6\z\1\s\x\4\i\l ]] 00:26:56.661 01:09:30 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:56.661 01:09:30 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:56.661 [2024-11-18 01:09:30.980567] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:56.661 [2024-11-18 01:09:30.980835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144930 ] 00:26:56.921 [2024-11-18 01:09:31.136782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.921 [2024-11-18 01:09:31.210790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.180  [2024-11-18T01:09:31.839Z] Copying: 512/512 [B] (average 500 kBps) 00:26:57.440 00:26:57.440 01:09:31 -- dd/posix.sh@93 -- # [[ 2lku6nk5x98rbv9gb5i22w0s2tbbinsjdj9ez7qufocka7pja5ejggmidf5dfrstdku0p0wa32kp6s6raebawovy1vosmhxcqpscpgksug0zcbhqb8xniozodo30jurh6jwdr5n6u9ojinny95gwd24fm2uip7ruejyybvxeqjxk984n7yiok24mapd200zagcln9gnmgty11j958nrzs1w9j5hykctaqycjwz395tmpq6lttwv3pjhkitrdfcqv2so5dh8gbcjfyxht28hzdls1jjctv4oxezi26pheflkht269uk14m8b3pjx1s4roaf098zekogwe346nd21t0a6kdbas00eosez7qyhiofycbfmghevviic2i4p9estvrd56sx0cokvfi6a5abpf7nf2iy0n5gx49bmovf0en84kn38wjnr60s28nqav03vwlv0kb9zrcehe3fo0rximalq9ovn4rask5h2vni2g12q113nefmqwttsf6z1sx4il == \2\l\k\u\6\n\k\5\x\9\8\r\b\v\9\g\b\5\i\2\2\w\0\s\2\t\b\b\i\n\s\j\d\j\9\e\z\7\q\u\f\o\c\k\a\7\p\j\a\5\e\j\g\g\m\i\d\f\5\d\f\r\s\t\d\k\u\0\p\0\w\a\3\2\k\p\6\s\6\r\a\e\b\a\w\o\v\y\1\v\o\s\m\h\x\c\q\p\s\c\p\g\k\s\u\g\0\z\c\b\h\q\b\8\x\n\i\o\z\o\d\o\3\0\j\u\r\h\6\j\w\d\r\5\n\6\u\9\o\j\i\n\n\y\9\5\g\w\d\2\4\f\m\2\u\i\p\7\r\u\e\j\y\y\b\v\x\e\q\j\x\k\9\8\4\n\7\y\i\o\k\2\4\m\a\p\d\2\0\0\z\a\g\c\l\n\9\g\n\m\g\t\y\1\1\j\9\5\8\n\r\z\s\1\w\9\j\5\h\y\k\c\t\a\q\y\c\j\w\z\3\9\5\t\m\p\q\6\l\t\t\w\v\3\p\j\h\k\i\t\r\d\f\c\q\v\2\s\o\5\d\h\8\g\b\c\j\f\y\x\h\t\2\8\h\z\d\l\s\1\j\j\c\t\v\4\o\x\e\z\i\2\6\p\h\e\f\l\k\h\t\2\6\9\u\k\1\4\m\8\b\3\p\j\x\1\s\4\r\o\a\f\0\9\8\z\e\k\o\g\w\e\3\4\6\n\d\2\1\t\0\a\6\k\d\b\a\s\0\0\e\o\s\e\z\7\q\y\h\i\o\f\y\c\b\f\m\g\h\e\v\v\i\i\c\2\i\4\p\9\e\s\t\v\r\d\5\6\s\x\0\c\o\k\v\f\i\6\a\5\a\b\p\f\7\n\f\2\i\y\0\n\5\g\x\4\9\b\m\o\v\f\0\e\n\8\4\k\n\3\8\w\j\n\r\6\0\s\2\8\n\q\a\v\0\3\v\w\l\v\0\k\b\9\z\r\c\e\h\e\3\f\o\0\r\x\i\m\a\l\q\9\o\v\n\4\r\a\s\k\5\h\2\v\n\i\2\g\1\2\q\1\1\3\n\e\f\m\q\w\t\t\s\f\6\z\1\s\x\4\i\l ]] 00:26:57.440 01:09:31 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:57.440 01:09:31 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:57.440 [2024-11-18 01:09:31.828269] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:57.440 [2024-11-18 01:09:31.828555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144947 ] 00:26:57.700 [2024-11-18 01:09:31.983897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.700 [2024-11-18 01:09:32.055698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.959  [2024-11-18T01:09:32.618Z] Copying: 512/512 [B] (average 166 kBps) 00:26:58.219 00:26:58.219 01:09:32 -- dd/posix.sh@93 -- # [[ 2lku6nk5x98rbv9gb5i22w0s2tbbinsjdj9ez7qufocka7pja5ejggmidf5dfrstdku0p0wa32kp6s6raebawovy1vosmhxcqpscpgksug0zcbhqb8xniozodo30jurh6jwdr5n6u9ojinny95gwd24fm2uip7ruejyybvxeqjxk984n7yiok24mapd200zagcln9gnmgty11j958nrzs1w9j5hykctaqycjwz395tmpq6lttwv3pjhkitrdfcqv2so5dh8gbcjfyxht28hzdls1jjctv4oxezi26pheflkht269uk14m8b3pjx1s4roaf098zekogwe346nd21t0a6kdbas00eosez7qyhiofycbfmghevviic2i4p9estvrd56sx0cokvfi6a5abpf7nf2iy0n5gx49bmovf0en84kn38wjnr60s28nqav03vwlv0kb9zrcehe3fo0rximalq9ovn4rask5h2vni2g12q113nefmqwttsf6z1sx4il == \2\l\k\u\6\n\k\5\x\9\8\r\b\v\9\g\b\5\i\2\2\w\0\s\2\t\b\b\i\n\s\j\d\j\9\e\z\7\q\u\f\o\c\k\a\7\p\j\a\5\e\j\g\g\m\i\d\f\5\d\f\r\s\t\d\k\u\0\p\0\w\a\3\2\k\p\6\s\6\r\a\e\b\a\w\o\v\y\1\v\o\s\m\h\x\c\q\p\s\c\p\g\k\s\u\g\0\z\c\b\h\q\b\8\x\n\i\o\z\o\d\o\3\0\j\u\r\h\6\j\w\d\r\5\n\6\u\9\o\j\i\n\n\y\9\5\g\w\d\2\4\f\m\2\u\i\p\7\r\u\e\j\y\y\b\v\x\e\q\j\x\k\9\8\4\n\7\y\i\o\k\2\4\m\a\p\d\2\0\0\z\a\g\c\l\n\9\g\n\m\g\t\y\1\1\j\9\5\8\n\r\z\s\1\w\9\j\5\h\y\k\c\t\a\q\y\c\j\w\z\3\9\5\t\m\p\q\6\l\t\t\w\v\3\p\j\h\k\i\t\r\d\f\c\q\v\2\s\o\5\d\h\8\g\b\c\j\f\y\x\h\t\2\8\h\z\d\l\s\1\j\j\c\t\v\4\o\x\e\z\i\2\6\p\h\e\f\l\k\h\t\2\6\9\u\k\1\4\m\8\b\3\p\j\x\1\s\4\r\o\a\f\0\9\8\z\e\k\o\g\w\e\3\4\6\n\d\2\1\t\0\a\6\k\d\b\a\s\0\0\e\o\s\e\z\7\q\y\h\i\o\f\y\c\b\f\m\g\h\e\v\v\i\i\c\2\i\4\p\9\e\s\t\v\r\d\5\6\s\x\0\c\o\k\v\f\i\6\a\5\a\b\p\f\7\n\f\2\i\y\0\n\5\g\x\4\9\b\m\o\v\f\0\e\n\8\4\k\n\3\8\w\j\n\r\6\0\s\2\8\n\q\a\v\0\3\v\w\l\v\0\k\b\9\z\r\c\e\h\e\3\f\o\0\r\x\i\m\a\l\q\9\o\v\n\4\r\a\s\k\5\h\2\v\n\i\2\g\1\2\q\1\1\3\n\e\f\m\q\w\t\t\s\f\6\z\1\s\x\4\i\l ]] 00:26:58.219 01:09:32 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:58.219 01:09:32 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:58.479 [2024-11-18 01:09:32.671427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:58.479 [2024-11-18 01:09:32.671790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144956 ] 00:26:58.479 [2024-11-18 01:09:32.828963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.738 [2024-11-18 01:09:32.904747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.738  [2024-11-18T01:09:33.705Z] Copying: 512/512 [B] (average 100 kBps) 00:26:59.306 00:26:59.307 01:09:33 -- dd/posix.sh@93 -- # [[ 2lku6nk5x98rbv9gb5i22w0s2tbbinsjdj9ez7qufocka7pja5ejggmidf5dfrstdku0p0wa32kp6s6raebawovy1vosmhxcqpscpgksug0zcbhqb8xniozodo30jurh6jwdr5n6u9ojinny95gwd24fm2uip7ruejyybvxeqjxk984n7yiok24mapd200zagcln9gnmgty11j958nrzs1w9j5hykctaqycjwz395tmpq6lttwv3pjhkitrdfcqv2so5dh8gbcjfyxht28hzdls1jjctv4oxezi26pheflkht269uk14m8b3pjx1s4roaf098zekogwe346nd21t0a6kdbas00eosez7qyhiofycbfmghevviic2i4p9estvrd56sx0cokvfi6a5abpf7nf2iy0n5gx49bmovf0en84kn38wjnr60s28nqav03vwlv0kb9zrcehe3fo0rximalq9ovn4rask5h2vni2g12q113nefmqwttsf6z1sx4il == \2\l\k\u\6\n\k\5\x\9\8\r\b\v\9\g\b\5\i\2\2\w\0\s\2\t\b\b\i\n\s\j\d\j\9\e\z\7\q\u\f\o\c\k\a\7\p\j\a\5\e\j\g\g\m\i\d\f\5\d\f\r\s\t\d\k\u\0\p\0\w\a\3\2\k\p\6\s\6\r\a\e\b\a\w\o\v\y\1\v\o\s\m\h\x\c\q\p\s\c\p\g\k\s\u\g\0\z\c\b\h\q\b\8\x\n\i\o\z\o\d\o\3\0\j\u\r\h\6\j\w\d\r\5\n\6\u\9\o\j\i\n\n\y\9\5\g\w\d\2\4\f\m\2\u\i\p\7\r\u\e\j\y\y\b\v\x\e\q\j\x\k\9\8\4\n\7\y\i\o\k\2\4\m\a\p\d\2\0\0\z\a\g\c\l\n\9\g\n\m\g\t\y\1\1\j\9\5\8\n\r\z\s\1\w\9\j\5\h\y\k\c\t\a\q\y\c\j\w\z\3\9\5\t\m\p\q\6\l\t\t\w\v\3\p\j\h\k\i\t\r\d\f\c\q\v\2\s\o\5\d\h\8\g\b\c\j\f\y\x\h\t\2\8\h\z\d\l\s\1\j\j\c\t\v\4\o\x\e\z\i\2\6\p\h\e\f\l\k\h\t\2\6\9\u\k\1\4\m\8\b\3\p\j\x\1\s\4\r\o\a\f\0\9\8\z\e\k\o\g\w\e\3\4\6\n\d\2\1\t\0\a\6\k\d\b\a\s\0\0\e\o\s\e\z\7\q\y\h\i\o\f\y\c\b\f\m\g\h\e\v\v\i\i\c\2\i\4\p\9\e\s\t\v\r\d\5\6\s\x\0\c\o\k\v\f\i\6\a\5\a\b\p\f\7\n\f\2\i\y\0\n\5\g\x\4\9\b\m\o\v\f\0\e\n\8\4\k\n\3\8\w\j\n\r\6\0\s\2\8\n\q\a\v\0\3\v\w\l\v\0\k\b\9\z\r\c\e\h\e\3\f\o\0\r\x\i\m\a\l\q\9\o\v\n\4\r\a\s\k\5\h\2\v\n\i\2\g\1\2\q\1\1\3\n\e\f\m\q\w\t\t\s\f\6\z\1\s\x\4\i\l ]] 00:26:59.307 01:09:33 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:59.307 01:09:33 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:59.307 01:09:33 -- dd/common.sh@98 -- # xtrace_disable 00:26:59.307 01:09:33 -- common/autotest_common.sh@10 -- # set +x 00:26:59.307 01:09:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:59.307 01:09:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:59.307 [2024-11-18 01:09:33.534179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:59.307 [2024-11-18 01:09:33.534474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144969 ] 00:26:59.307 [2024-11-18 01:09:33.688241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.580 [2024-11-18 01:09:33.775619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.580  [2024-11-18T01:09:34.616Z] Copying: 512/512 [B] (average 500 kBps) 00:27:00.217 00:27:00.217 01:09:34 -- dd/posix.sh@93 -- # [[ 5m6na5zlt9dg6e3nkg9gll7cz3lgb4epvzdq0l830rbng7vevcb5ablptn4sdf2tuss7aa9k4zyj1r8hbnwrbl41d264bwpx71ptexppdilv65bgjuxn70z1c79bdr4fonin9odqf5py5p689h49lktpy1w9dg2iwg9ohjy0aqgjqmeksjvu54gko8oom9vteuf61wi4k2cv2wc0u0ebt76avlz59ks88s30sqrjpp8m3zqk7p07sjl71fh3bu1s6t32nwlaop3ygeagof3aco42f9wxwbr95zqz4q0gk0fy7wapo9osx3k7uicfzxujqjgyqg90xsro3rli6tzzz9aqc2ll02ftus03umxc050tg4d967ufbtfpulg3drlnbgs7mh3ybvedacpc6mwizi3duetf2p6be6qrm5xc8ys5g1hf3qbxkjux2xtwnh7a14qy2eng1kmxn9fshu29m1l0nvbz5etw14vwrliz0eu1gce0jmqahnjqwjh5ha0c == \5\m\6\n\a\5\z\l\t\9\d\g\6\e\3\n\k\g\9\g\l\l\7\c\z\3\l\g\b\4\e\p\v\z\d\q\0\l\8\3\0\r\b\n\g\7\v\e\v\c\b\5\a\b\l\p\t\n\4\s\d\f\2\t\u\s\s\7\a\a\9\k\4\z\y\j\1\r\8\h\b\n\w\r\b\l\4\1\d\2\6\4\b\w\p\x\7\1\p\t\e\x\p\p\d\i\l\v\6\5\b\g\j\u\x\n\7\0\z\1\c\7\9\b\d\r\4\f\o\n\i\n\9\o\d\q\f\5\p\y\5\p\6\8\9\h\4\9\l\k\t\p\y\1\w\9\d\g\2\i\w\g\9\o\h\j\y\0\a\q\g\j\q\m\e\k\s\j\v\u\5\4\g\k\o\8\o\o\m\9\v\t\e\u\f\6\1\w\i\4\k\2\c\v\2\w\c\0\u\0\e\b\t\7\6\a\v\l\z\5\9\k\s\8\8\s\3\0\s\q\r\j\p\p\8\m\3\z\q\k\7\p\0\7\s\j\l\7\1\f\h\3\b\u\1\s\6\t\3\2\n\w\l\a\o\p\3\y\g\e\a\g\o\f\3\a\c\o\4\2\f\9\w\x\w\b\r\9\5\z\q\z\4\q\0\g\k\0\f\y\7\w\a\p\o\9\o\s\x\3\k\7\u\i\c\f\z\x\u\j\q\j\g\y\q\g\9\0\x\s\r\o\3\r\l\i\6\t\z\z\z\9\a\q\c\2\l\l\0\2\f\t\u\s\0\3\u\m\x\c\0\5\0\t\g\4\d\9\6\7\u\f\b\t\f\p\u\l\g\3\d\r\l\n\b\g\s\7\m\h\3\y\b\v\e\d\a\c\p\c\6\m\w\i\z\i\3\d\u\e\t\f\2\p\6\b\e\6\q\r\m\5\x\c\8\y\s\5\g\1\h\f\3\q\b\x\k\j\u\x\2\x\t\w\n\h\7\a\1\4\q\y\2\e\n\g\1\k\m\x\n\9\f\s\h\u\2\9\m\1\l\0\n\v\b\z\5\e\t\w\1\4\v\w\r\l\i\z\0\e\u\1\g\c\e\0\j\m\q\a\h\n\j\q\w\j\h\5\h\a\0\c ]] 00:27:00.217 01:09:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:00.217 01:09:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:00.217 [2024-11-18 01:09:34.410366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:00.217 [2024-11-18 01:09:34.411258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144986 ] 00:27:00.217 [2024-11-18 01:09:34.576494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.477 [2024-11-18 01:09:34.662449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.477  [2024-11-18T01:09:35.444Z] Copying: 512/512 [B] (average 500 kBps) 00:27:01.045 00:27:01.046 01:09:35 -- dd/posix.sh@93 -- # [[ 5m6na5zlt9dg6e3nkg9gll7cz3lgb4epvzdq0l830rbng7vevcb5ablptn4sdf2tuss7aa9k4zyj1r8hbnwrbl41d264bwpx71ptexppdilv65bgjuxn70z1c79bdr4fonin9odqf5py5p689h49lktpy1w9dg2iwg9ohjy0aqgjqmeksjvu54gko8oom9vteuf61wi4k2cv2wc0u0ebt76avlz59ks88s30sqrjpp8m3zqk7p07sjl71fh3bu1s6t32nwlaop3ygeagof3aco42f9wxwbr95zqz4q0gk0fy7wapo9osx3k7uicfzxujqjgyqg90xsro3rli6tzzz9aqc2ll02ftus03umxc050tg4d967ufbtfpulg3drlnbgs7mh3ybvedacpc6mwizi3duetf2p6be6qrm5xc8ys5g1hf3qbxkjux2xtwnh7a14qy2eng1kmxn9fshu29m1l0nvbz5etw14vwrliz0eu1gce0jmqahnjqwjh5ha0c == \5\m\6\n\a\5\z\l\t\9\d\g\6\e\3\n\k\g\9\g\l\l\7\c\z\3\l\g\b\4\e\p\v\z\d\q\0\l\8\3\0\r\b\n\g\7\v\e\v\c\b\5\a\b\l\p\t\n\4\s\d\f\2\t\u\s\s\7\a\a\9\k\4\z\y\j\1\r\8\h\b\n\w\r\b\l\4\1\d\2\6\4\b\w\p\x\7\1\p\t\e\x\p\p\d\i\l\v\6\5\b\g\j\u\x\n\7\0\z\1\c\7\9\b\d\r\4\f\o\n\i\n\9\o\d\q\f\5\p\y\5\p\6\8\9\h\4\9\l\k\t\p\y\1\w\9\d\g\2\i\w\g\9\o\h\j\y\0\a\q\g\j\q\m\e\k\s\j\v\u\5\4\g\k\o\8\o\o\m\9\v\t\e\u\f\6\1\w\i\4\k\2\c\v\2\w\c\0\u\0\e\b\t\7\6\a\v\l\z\5\9\k\s\8\8\s\3\0\s\q\r\j\p\p\8\m\3\z\q\k\7\p\0\7\s\j\l\7\1\f\h\3\b\u\1\s\6\t\3\2\n\w\l\a\o\p\3\y\g\e\a\g\o\f\3\a\c\o\4\2\f\9\w\x\w\b\r\9\5\z\q\z\4\q\0\g\k\0\f\y\7\w\a\p\o\9\o\s\x\3\k\7\u\i\c\f\z\x\u\j\q\j\g\y\q\g\9\0\x\s\r\o\3\r\l\i\6\t\z\z\z\9\a\q\c\2\l\l\0\2\f\t\u\s\0\3\u\m\x\c\0\5\0\t\g\4\d\9\6\7\u\f\b\t\f\p\u\l\g\3\d\r\l\n\b\g\s\7\m\h\3\y\b\v\e\d\a\c\p\c\6\m\w\i\z\i\3\d\u\e\t\f\2\p\6\b\e\6\q\r\m\5\x\c\8\y\s\5\g\1\h\f\3\q\b\x\k\j\u\x\2\x\t\w\n\h\7\a\1\4\q\y\2\e\n\g\1\k\m\x\n\9\f\s\h\u\2\9\m\1\l\0\n\v\b\z\5\e\t\w\1\4\v\w\r\l\i\z\0\e\u\1\g\c\e\0\j\m\q\a\h\n\j\q\w\j\h\5\h\a\0\c ]] 00:27:01.046 01:09:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:01.046 01:09:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:01.046 [2024-11-18 01:09:35.300484] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:01.046 [2024-11-18 01:09:35.300753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145002 ] 00:27:01.305 [2024-11-18 01:09:35.456287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.305 [2024-11-18 01:09:35.525178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.305  [2024-11-18T01:09:36.273Z] Copying: 512/512 [B] (average 166 kBps) 00:27:01.874 00:27:01.874 01:09:36 -- dd/posix.sh@93 -- # [[ 5m6na5zlt9dg6e3nkg9gll7cz3lgb4epvzdq0l830rbng7vevcb5ablptn4sdf2tuss7aa9k4zyj1r8hbnwrbl41d264bwpx71ptexppdilv65bgjuxn70z1c79bdr4fonin9odqf5py5p689h49lktpy1w9dg2iwg9ohjy0aqgjqmeksjvu54gko8oom9vteuf61wi4k2cv2wc0u0ebt76avlz59ks88s30sqrjpp8m3zqk7p07sjl71fh3bu1s6t32nwlaop3ygeagof3aco42f9wxwbr95zqz4q0gk0fy7wapo9osx3k7uicfzxujqjgyqg90xsro3rli6tzzz9aqc2ll02ftus03umxc050tg4d967ufbtfpulg3drlnbgs7mh3ybvedacpc6mwizi3duetf2p6be6qrm5xc8ys5g1hf3qbxkjux2xtwnh7a14qy2eng1kmxn9fshu29m1l0nvbz5etw14vwrliz0eu1gce0jmqahnjqwjh5ha0c == \5\m\6\n\a\5\z\l\t\9\d\g\6\e\3\n\k\g\9\g\l\l\7\c\z\3\l\g\b\4\e\p\v\z\d\q\0\l\8\3\0\r\b\n\g\7\v\e\v\c\b\5\a\b\l\p\t\n\4\s\d\f\2\t\u\s\s\7\a\a\9\k\4\z\y\j\1\r\8\h\b\n\w\r\b\l\4\1\d\2\6\4\b\w\p\x\7\1\p\t\e\x\p\p\d\i\l\v\6\5\b\g\j\u\x\n\7\0\z\1\c\7\9\b\d\r\4\f\o\n\i\n\9\o\d\q\f\5\p\y\5\p\6\8\9\h\4\9\l\k\t\p\y\1\w\9\d\g\2\i\w\g\9\o\h\j\y\0\a\q\g\j\q\m\e\k\s\j\v\u\5\4\g\k\o\8\o\o\m\9\v\t\e\u\f\6\1\w\i\4\k\2\c\v\2\w\c\0\u\0\e\b\t\7\6\a\v\l\z\5\9\k\s\8\8\s\3\0\s\q\r\j\p\p\8\m\3\z\q\k\7\p\0\7\s\j\l\7\1\f\h\3\b\u\1\s\6\t\3\2\n\w\l\a\o\p\3\y\g\e\a\g\o\f\3\a\c\o\4\2\f\9\w\x\w\b\r\9\5\z\q\z\4\q\0\g\k\0\f\y\7\w\a\p\o\9\o\s\x\3\k\7\u\i\c\f\z\x\u\j\q\j\g\y\q\g\9\0\x\s\r\o\3\r\l\i\6\t\z\z\z\9\a\q\c\2\l\l\0\2\f\t\u\s\0\3\u\m\x\c\0\5\0\t\g\4\d\9\6\7\u\f\b\t\f\p\u\l\g\3\d\r\l\n\b\g\s\7\m\h\3\y\b\v\e\d\a\c\p\c\6\m\w\i\z\i\3\d\u\e\t\f\2\p\6\b\e\6\q\r\m\5\x\c\8\y\s\5\g\1\h\f\3\q\b\x\k\j\u\x\2\x\t\w\n\h\7\a\1\4\q\y\2\e\n\g\1\k\m\x\n\9\f\s\h\u\2\9\m\1\l\0\n\v\b\z\5\e\t\w\1\4\v\w\r\l\i\z\0\e\u\1\g\c\e\0\j\m\q\a\h\n\j\q\w\j\h\5\h\a\0\c ]] 00:27:01.874 01:09:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:01.874 01:09:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:01.874 [2024-11-18 01:09:36.135352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:01.874 [2024-11-18 01:09:36.135734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145015 ] 00:27:02.134 [2024-11-18 01:09:36.289073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.134 [2024-11-18 01:09:36.365805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.134  [2024-11-18T01:09:37.103Z] Copying: 512/512 [B] (average 35 kBps) 00:27:02.705 00:27:02.705 01:09:36 -- dd/posix.sh@93 -- # [[ 5m6na5zlt9dg6e3nkg9gll7cz3lgb4epvzdq0l830rbng7vevcb5ablptn4sdf2tuss7aa9k4zyj1r8hbnwrbl41d264bwpx71ptexppdilv65bgjuxn70z1c79bdr4fonin9odqf5py5p689h49lktpy1w9dg2iwg9ohjy0aqgjqmeksjvu54gko8oom9vteuf61wi4k2cv2wc0u0ebt76avlz59ks88s30sqrjpp8m3zqk7p07sjl71fh3bu1s6t32nwlaop3ygeagof3aco42f9wxwbr95zqz4q0gk0fy7wapo9osx3k7uicfzxujqjgyqg90xsro3rli6tzzz9aqc2ll02ftus03umxc050tg4d967ufbtfpulg3drlnbgs7mh3ybvedacpc6mwizi3duetf2p6be6qrm5xc8ys5g1hf3qbxkjux2xtwnh7a14qy2eng1kmxn9fshu29m1l0nvbz5etw14vwrliz0eu1gce0jmqahnjqwjh5ha0c == \5\m\6\n\a\5\z\l\t\9\d\g\6\e\3\n\k\g\9\g\l\l\7\c\z\3\l\g\b\4\e\p\v\z\d\q\0\l\8\3\0\r\b\n\g\7\v\e\v\c\b\5\a\b\l\p\t\n\4\s\d\f\2\t\u\s\s\7\a\a\9\k\4\z\y\j\1\r\8\h\b\n\w\r\b\l\4\1\d\2\6\4\b\w\p\x\7\1\p\t\e\x\p\p\d\i\l\v\6\5\b\g\j\u\x\n\7\0\z\1\c\7\9\b\d\r\4\f\o\n\i\n\9\o\d\q\f\5\p\y\5\p\6\8\9\h\4\9\l\k\t\p\y\1\w\9\d\g\2\i\w\g\9\o\h\j\y\0\a\q\g\j\q\m\e\k\s\j\v\u\5\4\g\k\o\8\o\o\m\9\v\t\e\u\f\6\1\w\i\4\k\2\c\v\2\w\c\0\u\0\e\b\t\7\6\a\v\l\z\5\9\k\s\8\8\s\3\0\s\q\r\j\p\p\8\m\3\z\q\k\7\p\0\7\s\j\l\7\1\f\h\3\b\u\1\s\6\t\3\2\n\w\l\a\o\p\3\y\g\e\a\g\o\f\3\a\c\o\4\2\f\9\w\x\w\b\r\9\5\z\q\z\4\q\0\g\k\0\f\y\7\w\a\p\o\9\o\s\x\3\k\7\u\i\c\f\z\x\u\j\q\j\g\y\q\g\9\0\x\s\r\o\3\r\l\i\6\t\z\z\z\9\a\q\c\2\l\l\0\2\f\t\u\s\0\3\u\m\x\c\0\5\0\t\g\4\d\9\6\7\u\f\b\t\f\p\u\l\g\3\d\r\l\n\b\g\s\7\m\h\3\y\b\v\e\d\a\c\p\c\6\m\w\i\z\i\3\d\u\e\t\f\2\p\6\b\e\6\q\r\m\5\x\c\8\y\s\5\g\1\h\f\3\q\b\x\k\j\u\x\2\x\t\w\n\h\7\a\1\4\q\y\2\e\n\g\1\k\m\x\n\9\f\s\h\u\2\9\m\1\l\0\n\v\b\z\5\e\t\w\1\4\v\w\r\l\i\z\0\e\u\1\g\c\e\0\j\m\q\a\h\n\j\q\w\j\h\5\h\a\0\c ]] 00:27:02.705 00:27:02.705 real 0m6.879s 00:27:02.705 user 0m3.557s 00:27:02.705 sys 0m2.207s 00:27:02.705 01:09:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:02.705 01:09:36 -- common/autotest_common.sh@10 -- # set +x 00:27:02.705 ************************************ 00:27:02.705 END TEST dd_flags_misc_forced_aio 00:27:02.705 ************************************ 00:27:02.705 01:09:36 -- dd/posix.sh@1 -- # cleanup 00:27:02.705 01:09:36 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:02.705 01:09:36 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:02.705 00:27:02.705 real 0m30.011s 00:27:02.705 user 0m14.662s 00:27:02.705 sys 0m9.231s 00:27:02.705 01:09:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:02.705 01:09:37 -- common/autotest_common.sh@10 -- # set +x 00:27:02.705 ************************************ 00:27:02.705 END TEST spdk_dd_posix 00:27:02.705 ************************************ 00:27:02.705 01:09:37 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:27:02.705 01:09:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:02.705 01:09:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:27:02.705 01:09:37 -- common/autotest_common.sh@10 -- # set +x 00:27:02.705 ************************************ 00:27:02.705 START TEST spdk_dd_malloc 00:27:02.705 ************************************ 00:27:02.705 01:09:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:27:02.965 * Looking for test storage... 00:27:02.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:02.965 01:09:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:02.965 01:09:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:02.965 01:09:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:02.965 01:09:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:02.965 01:09:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:02.965 01:09:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:02.965 01:09:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:02.965 01:09:37 -- scripts/common.sh@335 -- # IFS=.-: 00:27:02.965 01:09:37 -- scripts/common.sh@335 -- # read -ra ver1 00:27:02.965 01:09:37 -- scripts/common.sh@336 -- # IFS=.-: 00:27:02.965 01:09:37 -- scripts/common.sh@336 -- # read -ra ver2 00:27:02.965 01:09:37 -- scripts/common.sh@337 -- # local 'op=<' 00:27:02.965 01:09:37 -- scripts/common.sh@339 -- # ver1_l=2 00:27:02.965 01:09:37 -- scripts/common.sh@340 -- # ver2_l=1 00:27:02.965 01:09:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:02.965 01:09:37 -- scripts/common.sh@343 -- # case "$op" in 00:27:02.965 01:09:37 -- scripts/common.sh@344 -- # : 1 00:27:02.965 01:09:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:02.965 01:09:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:02.965 01:09:37 -- scripts/common.sh@364 -- # decimal 1 00:27:02.965 01:09:37 -- scripts/common.sh@352 -- # local d=1 00:27:02.965 01:09:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:02.965 01:09:37 -- scripts/common.sh@354 -- # echo 1 00:27:02.965 01:09:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:02.965 01:09:37 -- scripts/common.sh@365 -- # decimal 2 00:27:02.965 01:09:37 -- scripts/common.sh@352 -- # local d=2 00:27:02.965 01:09:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:02.965 01:09:37 -- scripts/common.sh@354 -- # echo 2 00:27:02.965 01:09:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:02.965 01:09:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:02.965 01:09:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:02.965 01:09:37 -- scripts/common.sh@367 -- # return 0 00:27:02.965 01:09:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:02.965 01:09:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:02.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.965 --rc genhtml_branch_coverage=1 00:27:02.965 --rc genhtml_function_coverage=1 00:27:02.965 --rc genhtml_legend=1 00:27:02.965 --rc geninfo_all_blocks=1 00:27:02.965 --rc geninfo_unexecuted_blocks=1 00:27:02.965 00:27:02.965 ' 00:27:02.965 01:09:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:02.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.965 --rc genhtml_branch_coverage=1 00:27:02.965 --rc genhtml_function_coverage=1 00:27:02.965 --rc genhtml_legend=1 00:27:02.965 --rc geninfo_all_blocks=1 00:27:02.965 --rc geninfo_unexecuted_blocks=1 00:27:02.965 00:27:02.965 ' 00:27:02.965 01:09:37 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:27:02.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.965 --rc genhtml_branch_coverage=1 00:27:02.965 --rc genhtml_function_coverage=1 00:27:02.965 --rc genhtml_legend=1 00:27:02.965 --rc geninfo_all_blocks=1 00:27:02.965 --rc geninfo_unexecuted_blocks=1 00:27:02.965 00:27:02.965 ' 00:27:02.965 01:09:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:02.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.965 --rc genhtml_branch_coverage=1 00:27:02.965 --rc genhtml_function_coverage=1 00:27:02.965 --rc genhtml_legend=1 00:27:02.965 --rc geninfo_all_blocks=1 00:27:02.965 --rc geninfo_unexecuted_blocks=1 00:27:02.965 00:27:02.965 ' 00:27:02.965 01:09:37 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:02.965 01:09:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.965 01:09:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.965 01:09:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.965 01:09:37 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:02.965 01:09:37 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:02.965 01:09:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:02.965 01:09:37 -- paths/export.sh@5 -- # export PATH 00:27:02.965 01:09:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:02.965 01:09:37 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:27:02.965 01:09:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:02.965 
01:09:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:02.965 01:09:37 -- common/autotest_common.sh@10 -- # set +x 00:27:02.965 ************************************ 00:27:02.965 START TEST dd_malloc_copy 00:27:02.965 ************************************ 00:27:02.965 01:09:37 -- common/autotest_common.sh@1114 -- # malloc_copy 00:27:02.965 01:09:37 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:27:02.965 01:09:37 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:27:02.965 01:09:37 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:27:02.965 01:09:37 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:27:02.965 01:09:37 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:27:02.965 01:09:37 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:27:02.965 01:09:37 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:27:02.965 01:09:37 -- dd/malloc.sh@28 -- # gen_conf 00:27:02.965 01:09:37 -- dd/common.sh@31 -- # xtrace_disable 00:27:02.965 01:09:37 -- common/autotest_common.sh@10 -- # set +x 00:27:02.965 { 00:27:02.965 "subsystems": [ 00:27:02.965 { 00:27:02.965 "subsystem": "bdev", 00:27:02.965 "config": [ 00:27:02.965 { 00:27:02.965 "params": { 00:27:02.965 "block_size": 512, 00:27:02.965 "num_blocks": 1048576, 00:27:02.965 "name": "malloc0" 00:27:02.965 }, 00:27:02.965 "method": "bdev_malloc_create" 00:27:02.965 }, 00:27:02.965 { 00:27:02.965 "params": { 00:27:02.965 "block_size": 512, 00:27:02.965 "num_blocks": 1048576, 00:27:02.965 "name": "malloc1" 00:27:02.965 }, 00:27:02.965 "method": "bdev_malloc_create" 00:27:02.965 }, 00:27:02.965 { 00:27:02.965 "method": "bdev_wait_for_examine" 00:27:02.965 } 00:27:02.965 ] 00:27:02.965 } 00:27:02.965 ] 00:27:02.965 } 00:27:02.965 [2024-11-18 01:09:37.318793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:02.965 [2024-11-18 01:09:37.319027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145105 ] 00:27:03.224 [2024-11-18 01:09:37.475515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.224 [2024-11-18 01:09:37.549018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.130  [2024-11-18T01:09:40.096Z] Copying: 232/512 [MB] (232 MBps) [2024-11-18T01:09:40.355Z] Copying: 464/512 [MB] (231 MBps) [2024-11-18T01:09:41.291Z] Copying: 512/512 [MB] (average 231 MBps) 00:27:06.892 00:27:06.892 01:09:41 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:27:06.892 01:09:41 -- dd/malloc.sh@33 -- # gen_conf 00:27:06.892 01:09:41 -- dd/common.sh@31 -- # xtrace_disable 00:27:06.892 01:09:41 -- common/autotest_common.sh@10 -- # set +x 00:27:07.151 { 00:27:07.151 "subsystems": [ 00:27:07.151 { 00:27:07.151 "subsystem": "bdev", 00:27:07.151 "config": [ 00:27:07.151 { 00:27:07.151 "params": { 00:27:07.151 "block_size": 512, 00:27:07.151 "num_blocks": 1048576, 00:27:07.151 "name": "malloc0" 00:27:07.151 }, 00:27:07.151 "method": "bdev_malloc_create" 00:27:07.151 }, 00:27:07.151 { 00:27:07.151 "params": { 00:27:07.151 "block_size": 512, 00:27:07.151 "num_blocks": 1048576, 00:27:07.151 "name": "malloc1" 00:27:07.151 }, 00:27:07.151 "method": "bdev_malloc_create" 00:27:07.151 }, 00:27:07.151 { 00:27:07.151 "method": "bdev_wait_for_examine" 00:27:07.151 } 00:27:07.151 ] 00:27:07.151 } 00:27:07.151 ] 00:27:07.151 } 00:27:07.151 [2024-11-18 01:09:41.348363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:07.151 [2024-11-18 01:09:41.348620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145168 ] 00:27:07.151 [2024-11-18 01:09:41.504251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.410 [2024-11-18 01:09:41.572119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.788  [2024-11-18T01:09:44.123Z] Copying: 232/512 [MB] (232 MBps) [2024-11-18T01:09:44.381Z] Copying: 466/512 [MB] (234 MBps) [2024-11-18T01:09:45.324Z] Copying: 512/512 [MB] (average 233 MBps) 00:27:10.925 00:27:10.925 00:27:10.925 real 0m8.035s 00:27:10.925 user 0m6.527s 00:27:10.925 sys 0m1.371s 00:27:10.925 01:09:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:10.925 01:09:45 -- common/autotest_common.sh@10 -- # set +x 00:27:10.925 ************************************ 00:27:10.925 END TEST dd_malloc_copy 00:27:10.925 ************************************ 00:27:11.184 00:27:11.184 real 0m8.279s 00:27:11.184 user 0m6.655s 00:27:11.184 sys 0m1.506s 00:27:11.184 01:09:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:11.184 ************************************ 00:27:11.184 END TEST spdk_dd_malloc 00:27:11.184 01:09:45 -- common/autotest_common.sh@10 -- # set +x 00:27:11.184 ************************************ 00:27:11.184 01:09:45 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:27:11.184 01:09:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:11.184 01:09:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:11.184 01:09:45 -- common/autotest_common.sh@10 -- # set +x 00:27:11.184 ************************************ 00:27:11.184 START TEST spdk_dd_bdev_to_bdev 00:27:11.184 ************************************ 00:27:11.184 01:09:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:27:11.184 * Looking for test storage... 00:27:11.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:11.184 01:09:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:11.184 01:09:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:11.184 01:09:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:11.444 01:09:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:11.444 01:09:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:11.444 01:09:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:11.444 01:09:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:11.444 01:09:45 -- scripts/common.sh@335 -- # IFS=.-: 00:27:11.444 01:09:45 -- scripts/common.sh@335 -- # read -ra ver1 00:27:11.444 01:09:45 -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.444 01:09:45 -- scripts/common.sh@336 -- # read -ra ver2 00:27:11.444 01:09:45 -- scripts/common.sh@337 -- # local 'op=<' 00:27:11.444 01:09:45 -- scripts/common.sh@339 -- # ver1_l=2 00:27:11.444 01:09:45 -- scripts/common.sh@340 -- # ver2_l=1 00:27:11.444 01:09:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:11.444 01:09:45 -- scripts/common.sh@343 -- # case "$op" in 00:27:11.444 01:09:45 -- scripts/common.sh@344 -- # : 1 00:27:11.444 01:09:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:11.444 01:09:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:11.444 01:09:45 -- scripts/common.sh@364 -- # decimal 1 00:27:11.444 01:09:45 -- scripts/common.sh@352 -- # local d=1 00:27:11.444 01:09:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.444 01:09:45 -- scripts/common.sh@354 -- # echo 1 00:27:11.444 01:09:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:11.444 01:09:45 -- scripts/common.sh@365 -- # decimal 2 00:27:11.444 01:09:45 -- scripts/common.sh@352 -- # local d=2 00:27:11.444 01:09:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.444 01:09:45 -- scripts/common.sh@354 -- # echo 2 00:27:11.444 01:09:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:11.444 01:09:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:11.444 01:09:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:11.444 01:09:45 -- scripts/common.sh@367 -- # return 0 00:27:11.444 01:09:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.444 01:09:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:11.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.444 --rc genhtml_branch_coverage=1 00:27:11.444 --rc genhtml_function_coverage=1 00:27:11.444 --rc genhtml_legend=1 00:27:11.444 --rc geninfo_all_blocks=1 00:27:11.444 --rc geninfo_unexecuted_blocks=1 00:27:11.444 00:27:11.444 ' 00:27:11.444 01:09:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:11.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.444 --rc genhtml_branch_coverage=1 00:27:11.444 --rc genhtml_function_coverage=1 00:27:11.444 --rc genhtml_legend=1 00:27:11.444 --rc geninfo_all_blocks=1 00:27:11.444 --rc geninfo_unexecuted_blocks=1 00:27:11.444 00:27:11.444 ' 00:27:11.444 01:09:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:11.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.444 --rc genhtml_branch_coverage=1 00:27:11.444 --rc genhtml_function_coverage=1 00:27:11.444 --rc genhtml_legend=1 00:27:11.444 --rc geninfo_all_blocks=1 00:27:11.444 --rc geninfo_unexecuted_blocks=1 00:27:11.444 00:27:11.444 ' 00:27:11.444 01:09:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:11.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.444 --rc genhtml_branch_coverage=1 00:27:11.444 --rc genhtml_function_coverage=1 00:27:11.444 --rc genhtml_legend=1 00:27:11.444 --rc geninfo_all_blocks=1 00:27:11.444 --rc geninfo_unexecuted_blocks=1 00:27:11.444 00:27:11.444 ' 00:27:11.445 01:09:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:11.445 01:09:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.445 01:09:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.445 01:09:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.445 01:09:45 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:11.445 01:09:45 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:11.445 01:09:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:11.445 01:09:45 -- paths/export.sh@5 -- # export PATH 00:27:11.445 01:09:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:11.445 01:09:45 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:27:11.445 01:09:45 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:27:11.445 01:09:45 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:27:11.445 01:09:45 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:27:11.445 01:09:45 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:27:11.445 01:09:45 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:27:11.445 01:09:45 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:27:11.445 01:09:45 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:27:11.445 01:09:45 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:27:11.445 01:09:45 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:27:11.445 01:09:45 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:27:11.445 01:09:45 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:27:11.445 01:09:45 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:27:11.445 01:09:45 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:27:11.445 [2024-11-18 01:09:45.675670] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:11.445 [2024-11-18 01:09:45.675939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145287 ] 00:27:11.445 [2024-11-18 01:09:45.832326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.704 [2024-11-18 01:09:45.899539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.963  [2024-11-18T01:09:46.931Z] Copying: 256/256 [MB] (average 973 MBps) 00:27:12.532 00:27:12.532 01:09:46 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:12.532 01:09:46 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:12.532 01:09:46 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:27:12.532 01:09:46 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:27:12.532 01:09:46 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:27:12.532 01:09:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:27:12.532 01:09:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:12.532 01:09:46 -- common/autotest_common.sh@10 -- # set +x 00:27:12.532 ************************************ 00:27:12.532 START TEST dd_inflate_file 00:27:12.532 ************************************ 00:27:12.532 01:09:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:27:12.532 [2024-11-18 01:09:46.783044] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:12.532 [2024-11-18 01:09:46.783312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145310 ] 00:27:12.791 [2024-11-18 01:09:46.939866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.791 [2024-11-18 01:09:47.006164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.050  [2024-11-18T01:09:47.709Z] Copying: 64/64 [MB] (average 688 MBps) 00:27:13.310 00:27:13.310 00:27:13.310 real 0m0.927s 00:27:13.310 user 0m0.423s 00:27:13.310 sys 0m0.366s 00:27:13.310 01:09:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:13.310 ************************************ 00:27:13.310 END TEST dd_inflate_file 00:27:13.310 ************************************ 00:27:13.310 01:09:47 -- common/autotest_common.sh@10 -- # set +x 00:27:13.310 01:09:47 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:27:13.310 01:09:47 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:27:13.310 01:09:47 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:27:13.310 01:09:47 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:27:13.310 01:09:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:27:13.310 01:09:47 -- dd/common.sh@31 -- # xtrace_disable 00:27:13.310 01:09:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:13.310 01:09:47 -- common/autotest_common.sh@10 -- # set +x 00:27:13.310 01:09:47 -- common/autotest_common.sh@10 -- # set +x 00:27:13.569 ************************************ 00:27:13.569 START TEST dd_copy_to_out_bdev 00:27:13.569 ************************************ 00:27:13.569 01:09:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:27:13.569 { 00:27:13.569 "subsystems": [ 00:27:13.569 { 00:27:13.569 "subsystem": "bdev", 00:27:13.569 "config": [ 00:27:13.569 { 00:27:13.569 "params": { 00:27:13.569 "block_size": 4096, 00:27:13.569 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:13.569 "name": "aio1" 00:27:13.569 }, 00:27:13.569 "method": "bdev_aio_create" 00:27:13.569 }, 00:27:13.569 { 00:27:13.569 "params": { 00:27:13.569 "trtype": "pcie", 00:27:13.569 "traddr": "0000:00:06.0", 00:27:13.569 "name": "Nvme0" 00:27:13.569 }, 00:27:13.569 "method": "bdev_nvme_attach_controller" 00:27:13.569 }, 00:27:13.569 { 00:27:13.569 "method": "bdev_wait_for_examine" 00:27:13.569 } 00:27:13.569 ] 00:27:13.569 } 00:27:13.569 ] 00:27:13.569 } 00:27:13.569 [2024-11-18 01:09:47.783110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:13.569 [2024-11-18 01:09:47.783379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145357 ] 00:27:13.569 [2024-11-18 01:09:47.939764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.829 [2024-11-18 01:09:48.007487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.766  [2024-11-18T01:09:49.731Z] Copying: 64/64 [MB] (average 80 MBps) 00:27:15.332 00:27:15.332 00:27:15.332 real 0m1.803s 00:27:15.332 user 0m1.353s 00:27:15.332 sys 0m0.330s 00:27:15.332 01:09:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:15.332 ************************************ 00:27:15.332 END TEST dd_copy_to_out_bdev 00:27:15.332 01:09:49 -- common/autotest_common.sh@10 -- # set +x 00:27:15.332 ************************************ 00:27:15.332 01:09:49 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:27:15.332 01:09:49 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:27:15.332 01:09:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:15.332 01:09:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:15.332 01:09:49 -- common/autotest_common.sh@10 -- # set +x 00:27:15.332 ************************************ 00:27:15.332 START TEST dd_offset_magic 00:27:15.332 ************************************ 00:27:15.332 01:09:49 -- common/autotest_common.sh@1114 -- # offset_magic 00:27:15.332 01:09:49 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:27:15.332 01:09:49 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:27:15.332 01:09:49 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:27:15.332 01:09:49 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:27:15.333 01:09:49 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:27:15.333 01:09:49 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:27:15.333 01:09:49 -- dd/common.sh@31 -- # xtrace_disable 00:27:15.333 01:09:49 -- common/autotest_common.sh@10 -- # set +x 00:27:15.333 [2024-11-18 01:09:49.653560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:15.333 [2024-11-18 01:09:49.653768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145405 ] 00:27:15.333 { 00:27:15.333 "subsystems": [ 00:27:15.333 { 00:27:15.333 "subsystem": "bdev", 00:27:15.333 "config": [ 00:27:15.333 { 00:27:15.333 "params": { 00:27:15.333 "block_size": 4096, 00:27:15.333 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:15.333 "name": "aio1" 00:27:15.333 }, 00:27:15.333 "method": "bdev_aio_create" 00:27:15.333 }, 00:27:15.333 { 00:27:15.333 "params": { 00:27:15.333 "trtype": "pcie", 00:27:15.333 "traddr": "0000:00:06.0", 00:27:15.333 "name": "Nvme0" 00:27:15.333 }, 00:27:15.333 "method": "bdev_nvme_attach_controller" 00:27:15.333 }, 00:27:15.333 { 00:27:15.333 "method": "bdev_wait_for_examine" 00:27:15.333 } 00:27:15.333 ] 00:27:15.333 } 00:27:15.333 ] 00:27:15.333 } 00:27:15.595 [2024-11-18 01:09:49.796484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.595 [2024-11-18 01:09:49.862857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.193  [2024-11-18T01:09:51.175Z] Copying: 65/65 [MB] (average 145 MBps) 00:27:16.776 00:27:16.776 01:09:51 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:27:16.776 01:09:51 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:27:16.776 01:09:51 -- dd/common.sh@31 -- # xtrace_disable 00:27:16.776 01:09:51 -- common/autotest_common.sh@10 -- # set +x 00:27:16.776 { 00:27:16.776 "subsystems": [ 00:27:16.776 { 00:27:16.776 "subsystem": "bdev", 00:27:16.776 "config": [ 00:27:16.776 { 00:27:16.776 "params": { 00:27:16.776 "block_size": 4096, 00:27:16.776 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:16.776 "name": "aio1" 00:27:16.776 }, 00:27:16.776 "method": "bdev_aio_create" 00:27:16.776 }, 00:27:16.776 { 00:27:16.776 "params": { 00:27:16.776 "trtype": "pcie", 00:27:16.776 "traddr": "0000:00:06.0", 00:27:16.776 "name": "Nvme0" 00:27:16.776 }, 00:27:16.776 "method": "bdev_nvme_attach_controller" 00:27:16.776 }, 00:27:16.776 { 00:27:16.776 "method": "bdev_wait_for_examine" 00:27:16.776 } 00:27:16.777 ] 00:27:16.777 } 00:27:16.777 ] 00:27:16.777 } 00:27:16.777 [2024-11-18 01:09:51.092717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:16.777 [2024-11-18 01:09:51.092998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145431 ] 00:27:17.036 [2024-11-18 01:09:51.250097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.036 [2024-11-18 01:09:51.332639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.295  [2024-11-18T01:09:52.263Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:27:17.864 00:27:17.864 01:09:52 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:27:17.864 01:09:52 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:27:17.864 01:09:52 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:27:17.864 01:09:52 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:27:17.864 01:09:52 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:27:17.864 01:09:52 -- dd/common.sh@31 -- # xtrace_disable 00:27:17.864 01:09:52 -- common/autotest_common.sh@10 -- # set +x 00:27:17.864 { 00:27:17.864 "subsystems": [ 00:27:17.864 { 00:27:17.864 "subsystem": "bdev", 00:27:17.864 "config": [ 00:27:17.864 { 00:27:17.864 "params": { 00:27:17.864 "block_size": 4096, 00:27:17.864 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:17.864 "name": "aio1" 00:27:17.864 }, 00:27:17.864 "method": "bdev_aio_create" 00:27:17.864 }, 00:27:17.864 { 00:27:17.864 "params": { 00:27:17.864 "trtype": "pcie", 00:27:17.864 "traddr": "0000:00:06.0", 00:27:17.864 "name": "Nvme0" 00:27:17.864 }, 00:27:17.864 "method": "bdev_nvme_attach_controller" 00:27:17.864 }, 00:27:17.864 { 00:27:17.864 "method": "bdev_wait_for_examine" 00:27:17.864 } 00:27:17.864 ] 00:27:17.864 } 00:27:17.864 ] 00:27:17.864 } 00:27:17.864 [2024-11-18 01:09:52.109109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:17.864 [2024-11-18 01:09:52.109376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145453 ] 00:27:17.864 [2024-11-18 01:09:52.263939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.124 [2024-11-18 01:09:52.332938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.692  [2024-11-18T01:09:53.350Z] Copying: 65/65 [MB] (average 180 MBps) 00:27:18.951 00:27:19.210 01:09:53 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:27:19.210 01:09:53 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:27:19.210 01:09:53 -- dd/common.sh@31 -- # xtrace_disable 00:27:19.210 01:09:53 -- common/autotest_common.sh@10 -- # set +x 00:27:19.210 [2024-11-18 01:09:53.406429] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:19.210 [2024-11-18 01:09:53.406652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145482 ] 00:27:19.210 { 00:27:19.210 "subsystems": [ 00:27:19.210 { 00:27:19.210 "subsystem": "bdev", 00:27:19.210 "config": [ 00:27:19.210 { 00:27:19.210 "params": { 00:27:19.210 "block_size": 4096, 00:27:19.210 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:19.210 "name": "aio1" 00:27:19.210 }, 00:27:19.210 "method": "bdev_aio_create" 00:27:19.210 }, 00:27:19.210 { 00:27:19.210 "params": { 00:27:19.210 "trtype": "pcie", 00:27:19.210 "traddr": "0000:00:06.0", 00:27:19.210 "name": "Nvme0" 00:27:19.210 }, 00:27:19.210 "method": "bdev_nvme_attach_controller" 00:27:19.210 }, 00:27:19.210 { 00:27:19.210 "method": "bdev_wait_for_examine" 00:27:19.210 } 00:27:19.210 ] 00:27:19.210 } 00:27:19.210 ] 00:27:19.210 } 00:27:19.210 [2024-11-18 01:09:53.552740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.470 [2024-11-18 01:09:53.625301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.470  [2024-11-18T01:09:54.438Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:27:20.039 00:27:20.039 01:09:54 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:27:20.039 01:09:54 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:27:20.039 00:27:20.039 real 0m4.729s 00:27:20.039 user 0m2.346s 00:27:20.039 sys 0m1.272s 00:27:20.039 01:09:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:20.039 ************************************ 00:27:20.039 END TEST dd_offset_magic 00:27:20.039 ************************************ 00:27:20.039 01:09:54 -- common/autotest_common.sh@10 -- # set +x 00:27:20.039 01:09:54 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:27:20.039 01:09:54 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:27:20.039 01:09:54 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:20.039 01:09:54 -- dd/common.sh@11 -- # local nvme_ref= 00:27:20.039 01:09:54 -- dd/common.sh@12 -- # local size=4194330 00:27:20.039 01:09:54 -- dd/common.sh@14 -- # local bs=1048576 00:27:20.039 01:09:54 -- dd/common.sh@15 -- # local count=5 00:27:20.039 01:09:54 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:27:20.039 01:09:54 -- dd/common.sh@18 -- # gen_conf 00:27:20.039 01:09:54 -- dd/common.sh@31 -- # xtrace_disable 00:27:20.039 01:09:54 -- common/autotest_common.sh@10 -- # set +x 00:27:20.039 [2024-11-18 01:09:54.435317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:20.039 [2024-11-18 01:09:54.435500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145519 ] 00:27:20.039 { 00:27:20.039 "subsystems": [ 00:27:20.039 { 00:27:20.039 "subsystem": "bdev", 00:27:20.039 "config": [ 00:27:20.039 { 00:27:20.039 "params": { 00:27:20.039 "block_size": 4096, 00:27:20.039 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:20.039 "name": "aio1" 00:27:20.039 }, 00:27:20.039 "method": "bdev_aio_create" 00:27:20.039 }, 00:27:20.039 { 00:27:20.039 "params": { 00:27:20.039 "trtype": "pcie", 00:27:20.039 "traddr": "0000:00:06.0", 00:27:20.039 "name": "Nvme0" 00:27:20.039 }, 00:27:20.039 "method": "bdev_nvme_attach_controller" 00:27:20.039 }, 00:27:20.039 { 00:27:20.039 "method": "bdev_wait_for_examine" 00:27:20.039 } 00:27:20.039 ] 00:27:20.039 } 00:27:20.039 ] 00:27:20.039 } 00:27:20.299 [2024-11-18 01:09:54.576909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.299 [2024-11-18 01:09:54.643916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.558  [2024-11-18T01:09:55.526Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:27:21.127 00:27:21.127 01:09:55 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:27:21.127 01:09:55 -- dd/common.sh@10 -- # local bdev=aio1 00:27:21.127 01:09:55 -- dd/common.sh@11 -- # local nvme_ref= 00:27:21.127 01:09:55 -- dd/common.sh@12 -- # local size=4194330 00:27:21.127 01:09:55 -- dd/common.sh@14 -- # local bs=1048576 00:27:21.127 01:09:55 -- dd/common.sh@15 -- # local count=5 00:27:21.127 01:09:55 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:27:21.127 01:09:55 -- dd/common.sh@18 -- # gen_conf 00:27:21.127 01:09:55 -- dd/common.sh@31 -- # xtrace_disable 00:27:21.127 01:09:55 -- common/autotest_common.sh@10 -- # set +x 00:27:21.127 { 00:27:21.127 "subsystems": [ 00:27:21.127 { 00:27:21.127 "subsystem": "bdev", 00:27:21.127 "config": [ 00:27:21.127 { 00:27:21.127 "params": { 00:27:21.127 "block_size": 4096, 00:27:21.128 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:21.128 "name": "aio1" 00:27:21.128 }, 00:27:21.128 "method": "bdev_aio_create" 00:27:21.128 }, 00:27:21.128 { 00:27:21.128 "params": { 00:27:21.128 "trtype": "pcie", 00:27:21.128 "traddr": "0000:00:06.0", 00:27:21.128 "name": "Nvme0" 00:27:21.128 }, 00:27:21.128 "method": "bdev_nvme_attach_controller" 00:27:21.128 }, 00:27:21.128 { 00:27:21.128 "method": "bdev_wait_for_examine" 00:27:21.128 } 00:27:21.128 ] 00:27:21.128 } 00:27:21.128 ] 00:27:21.128 } 00:27:21.128 [2024-11-18 01:09:55.377469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:21.128 [2024-11-18 01:09:55.377734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145541 ] 00:27:21.387 [2024-11-18 01:09:55.532699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.387 [2024-11-18 01:09:55.598827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.647  [2024-11-18T01:09:56.305Z] Copying: 5120/5120 [kB] (average 192 MBps) 00:27:21.906 00:27:22.166 01:09:56 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:27:22.166 00:27:22.166 real 0m10.985s 00:27:22.166 user 0m5.860s 00:27:22.166 sys 0m3.350s 00:27:22.166 01:09:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:22.166 ************************************ 00:27:22.166 01:09:56 -- common/autotest_common.sh@10 -- # set +x 00:27:22.166 END TEST spdk_dd_bdev_to_bdev 00:27:22.166 ************************************ 00:27:22.167 01:09:56 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:27:22.167 01:09:56 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:27:22.167 01:09:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:22.167 01:09:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:22.167 01:09:56 -- common/autotest_common.sh@10 -- # set +x 00:27:22.167 ************************************ 00:27:22.167 START TEST spdk_dd_sparse 00:27:22.167 ************************************ 00:27:22.167 01:09:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:27:22.167 * Looking for test storage... 00:27:22.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:22.167 01:09:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:22.167 01:09:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:22.167 01:09:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:22.427 01:09:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:22.427 01:09:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:22.427 01:09:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:22.427 01:09:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:22.427 01:09:56 -- scripts/common.sh@335 -- # IFS=.-: 00:27:22.427 01:09:56 -- scripts/common.sh@335 -- # read -ra ver1 00:27:22.427 01:09:56 -- scripts/common.sh@336 -- # IFS=.-: 00:27:22.427 01:09:56 -- scripts/common.sh@336 -- # read -ra ver2 00:27:22.427 01:09:56 -- scripts/common.sh@337 -- # local 'op=<' 00:27:22.427 01:09:56 -- scripts/common.sh@339 -- # ver1_l=2 00:27:22.427 01:09:56 -- scripts/common.sh@340 -- # ver2_l=1 00:27:22.427 01:09:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:22.427 01:09:56 -- scripts/common.sh@343 -- # case "$op" in 00:27:22.427 01:09:56 -- scripts/common.sh@344 -- # : 1 00:27:22.427 01:09:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:22.427 01:09:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:22.427 01:09:56 -- scripts/common.sh@364 -- # decimal 1 00:27:22.427 01:09:56 -- scripts/common.sh@352 -- # local d=1 00:27:22.427 01:09:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:22.427 01:09:56 -- scripts/common.sh@354 -- # echo 1 00:27:22.427 01:09:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:22.427 01:09:56 -- scripts/common.sh@365 -- # decimal 2 00:27:22.427 01:09:56 -- scripts/common.sh@352 -- # local d=2 00:27:22.427 01:09:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:22.427 01:09:56 -- scripts/common.sh@354 -- # echo 2 00:27:22.427 01:09:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:22.427 01:09:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:22.427 01:09:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:22.427 01:09:56 -- scripts/common.sh@367 -- # return 0 00:27:22.427 01:09:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:22.427 01:09:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:22.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.427 --rc genhtml_branch_coverage=1 00:27:22.427 --rc genhtml_function_coverage=1 00:27:22.427 --rc genhtml_legend=1 00:27:22.427 --rc geninfo_all_blocks=1 00:27:22.427 --rc geninfo_unexecuted_blocks=1 00:27:22.427 00:27:22.427 ' 00:27:22.427 01:09:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:22.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.427 --rc genhtml_branch_coverage=1 00:27:22.427 --rc genhtml_function_coverage=1 00:27:22.427 --rc genhtml_legend=1 00:27:22.427 --rc geninfo_all_blocks=1 00:27:22.427 --rc geninfo_unexecuted_blocks=1 00:27:22.427 00:27:22.427 ' 00:27:22.427 01:09:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:22.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.427 --rc genhtml_branch_coverage=1 00:27:22.427 --rc genhtml_function_coverage=1 00:27:22.427 --rc genhtml_legend=1 00:27:22.427 --rc geninfo_all_blocks=1 00:27:22.427 --rc geninfo_unexecuted_blocks=1 00:27:22.427 00:27:22.427 ' 00:27:22.427 01:09:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:22.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.427 --rc genhtml_branch_coverage=1 00:27:22.427 --rc genhtml_function_coverage=1 00:27:22.427 --rc genhtml_legend=1 00:27:22.427 --rc geninfo_all_blocks=1 00:27:22.427 --rc geninfo_unexecuted_blocks=1 00:27:22.427 00:27:22.427 ' 00:27:22.427 01:09:56 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:22.427 01:09:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.427 01:09:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.427 01:09:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.427 01:09:56 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:22.427 01:09:56 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:22.427 01:09:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:22.427 01:09:56 -- paths/export.sh@5 -- # export PATH 00:27:22.427 01:09:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:22.427 01:09:56 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:27:22.427 01:09:56 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:27:22.427 01:09:56 -- dd/sparse.sh@110 -- # file1=file_zero1 00:27:22.427 01:09:56 -- dd/sparse.sh@111 -- # file2=file_zero2 00:27:22.427 01:09:56 -- dd/sparse.sh@112 -- # file3=file_zero3 00:27:22.427 01:09:56 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:27:22.427 01:09:56 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:27:22.427 01:09:56 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:27:22.427 01:09:56 -- dd/sparse.sh@118 -- # prepare 00:27:22.427 01:09:56 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:27:22.427 01:09:56 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:27:22.427 1+0 records in 00:27:22.427 1+0 records out 00:27:22.427 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0103443 s, 405 MB/s 00:27:22.427 01:09:56 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:27:22.427 1+0 records in 00:27:22.427 1+0 records out 00:27:22.427 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0113251 s, 370 MB/s 00:27:22.427 01:09:56 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:27:22.427 1+0 records in 00:27:22.427 1+0 records out 00:27:22.427 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0100892 s, 416 MB/s 00:27:22.427 01:09:56 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:27:22.427 01:09:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:22.427 01:09:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:22.427 01:09:56 -- common/autotest_common.sh@10 -- # set +x 00:27:22.427 ************************************ 00:27:22.427 START TEST dd_sparse_file_to_file 00:27:22.427 ************************************ 00:27:22.427 01:09:56 -- 
common/autotest_common.sh@1114 -- # file_to_file 00:27:22.427 01:09:56 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:27:22.427 01:09:56 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:27:22.427 01:09:56 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:22.427 01:09:56 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:27:22.427 01:09:56 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:27:22.427 01:09:56 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:27:22.427 01:09:56 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:27:22.427 01:09:56 -- dd/sparse.sh@41 -- # gen_conf 00:27:22.427 01:09:56 -- dd/common.sh@31 -- # xtrace_disable 00:27:22.427 01:09:56 -- common/autotest_common.sh@10 -- # set +x 00:27:22.427 { 00:27:22.427 "subsystems": [ 00:27:22.427 { 00:27:22.427 "subsystem": "bdev", 00:27:22.427 "config": [ 00:27:22.427 { 00:27:22.427 "params": { 00:27:22.427 "block_size": 4096, 00:27:22.427 "filename": "dd_sparse_aio_disk", 00:27:22.427 "name": "dd_aio" 00:27:22.427 }, 00:27:22.427 "method": "bdev_aio_create" 00:27:22.427 }, 00:27:22.427 { 00:27:22.427 "params": { 00:27:22.427 "lvs_name": "dd_lvstore", 00:27:22.427 "bdev_name": "dd_aio" 00:27:22.427 }, 00:27:22.427 "method": "bdev_lvol_create_lvstore" 00:27:22.427 }, 00:27:22.427 { 00:27:22.427 "method": "bdev_wait_for_examine" 00:27:22.427 } 00:27:22.427 ] 00:27:22.427 } 00:27:22.427 ] 00:27:22.427 } 00:27:22.427 [2024-11-18 01:09:56.777690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:22.428 [2024-11-18 01:09:56.777967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145623 ] 00:27:22.686 [2024-11-18 01:09:56.933410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.686 [2024-11-18 01:09:57.001798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.946  [2024-11-18T01:09:57.914Z] Copying: 12/36 [MB] (average 800 MBps) 00:27:23.515 00:27:23.515 01:09:57 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:27:23.515 01:09:57 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:27:23.515 01:09:57 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:27:23.515 01:09:57 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:27:23.515 01:09:57 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:27:23.515 01:09:57 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:27:23.515 01:09:57 -- dd/sparse.sh@52 -- # stat1_b=24576 00:27:23.515 01:09:57 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:27:23.515 01:09:57 -- dd/sparse.sh@53 -- # stat2_b=24576 00:27:23.515 01:09:57 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:27:23.515 ************************************ 00:27:23.515 END TEST dd_sparse_file_to_file 00:27:23.515 ************************************ 00:27:23.515 00:27:23.515 real 0m0.988s 00:27:23.515 user 0m0.530s 00:27:23.515 sys 0m0.333s 00:27:23.515 01:09:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:23.515 01:09:57 -- common/autotest_common.sh@10 -- # set +x 00:27:23.515 01:09:57 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:27:23.515 01:09:57 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:23.515 01:09:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:23.515 01:09:57 -- common/autotest_common.sh@10 -- # set +x 00:27:23.515 ************************************ 00:27:23.515 START TEST dd_sparse_file_to_bdev 00:27:23.515 ************************************ 00:27:23.515 01:09:57 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:27:23.515 01:09:57 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:23.515 01:09:57 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:27:23.515 01:09:57 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:27:23.515 01:09:57 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:27:23.515 01:09:57 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:27:23.515 01:09:57 -- dd/sparse.sh@73 -- # gen_conf 00:27:23.515 01:09:57 -- dd/common.sh@31 -- # xtrace_disable 00:27:23.515 01:09:57 -- common/autotest_common.sh@10 -- # set +x 00:27:23.515 { 00:27:23.515 "subsystems": [ 00:27:23.515 { 00:27:23.515 "subsystem": "bdev", 00:27:23.515 "config": [ 00:27:23.515 { 00:27:23.515 "params": { 00:27:23.515 "block_size": 4096, 00:27:23.515 "filename": "dd_sparse_aio_disk", 00:27:23.515 "name": "dd_aio" 00:27:23.515 }, 00:27:23.515 "method": "bdev_aio_create" 00:27:23.515 }, 00:27:23.515 { 00:27:23.515 "params": { 00:27:23.515 "lvs_name": "dd_lvstore", 00:27:23.515 "lvol_name": "dd_lvol", 00:27:23.515 "size": 37748736, 00:27:23.515 "thin_provision": true 00:27:23.515 }, 00:27:23.515 "method": "bdev_lvol_create" 00:27:23.515 }, 00:27:23.515 { 00:27:23.516 "method": "bdev_wait_for_examine" 00:27:23.516 } 00:27:23.516 ] 00:27:23.516 } 00:27:23.516 ] 00:27:23.516 } 00:27:23.516 [2024-11-18 01:09:57.829002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:23.516 [2024-11-18 01:09:57.829648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145676 ] 00:27:23.775 [2024-11-18 01:09:57.984963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.775 [2024-11-18 01:09:58.054065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.035 [2024-11-18 01:09:58.181319] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:27:24.035  [2024-11-18T01:09:58.434Z] Copying: 12/36 [MB] (average 480 MBps)[2024-11-18 01:09:58.229764] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:27:24.294 00:27:24.294 00:27:24.554 00:27:24.554 real 0m0.935s 00:27:24.554 user 0m0.530s 00:27:24.554 sys 0m0.292s 00:27:24.554 01:09:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:24.554 01:09:58 -- common/autotest_common.sh@10 -- # set +x 00:27:24.554 ************************************ 00:27:24.554 END TEST dd_sparse_file_to_bdev 00:27:24.554 ************************************ 00:27:24.554 01:09:58 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:27:24.554 01:09:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:24.554 01:09:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:24.554 01:09:58 -- common/autotest_common.sh@10 -- # set +x 00:27:24.554 ************************************ 00:27:24.554 START TEST dd_sparse_bdev_to_file 00:27:24.554 ************************************ 00:27:24.554 01:09:58 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:27:24.554 01:09:58 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:27:24.554 01:09:58 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:27:24.554 01:09:58 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:24.554 01:09:58 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:27:24.554 01:09:58 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:27:24.554 01:09:58 -- dd/sparse.sh@91 -- # gen_conf 00:27:24.554 01:09:58 -- dd/common.sh@31 -- # xtrace_disable 00:27:24.554 01:09:58 -- common/autotest_common.sh@10 -- # set +x 00:27:24.554 { 00:27:24.554 "subsystems": [ 00:27:24.554 { 00:27:24.554 "subsystem": "bdev", 00:27:24.554 "config": [ 00:27:24.554 { 00:27:24.554 "params": { 00:27:24.554 "block_size": 4096, 00:27:24.554 "filename": "dd_sparse_aio_disk", 00:27:24.554 "name": "dd_aio" 00:27:24.554 }, 00:27:24.554 "method": "bdev_aio_create" 00:27:24.554 }, 00:27:24.554 { 00:27:24.554 "method": "bdev_wait_for_examine" 00:27:24.554 } 00:27:24.554 ] 00:27:24.554 } 00:27:24.554 ] 00:27:24.554 } 00:27:24.554 [2024-11-18 01:09:58.828892] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:24.554 [2024-11-18 01:09:58.829322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145725 ] 00:27:24.814 [2024-11-18 01:09:58.984677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.814 [2024-11-18 01:09:59.050862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.814  [2024-11-18T01:09:59.780Z] Copying: 12/36 [MB] (average 857 MBps) 00:27:25.381 00:27:25.381 01:09:59 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:27:25.381 01:09:59 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:27:25.381 01:09:59 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:27:25.381 01:09:59 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:27:25.381 01:09:59 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:27:25.381 01:09:59 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:27:25.381 01:09:59 -- dd/sparse.sh@102 -- # stat2_b=24576 00:27:25.381 01:09:59 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:27:25.381 01:09:59 -- dd/sparse.sh@103 -- # stat3_b=24576 00:27:25.381 01:09:59 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:27:25.381 00:27:25.381 real 0m0.960s 00:27:25.381 user 0m0.509s 00:27:25.381 sys 0m0.332s 00:27:25.381 01:09:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:25.381 01:09:59 -- common/autotest_common.sh@10 -- # set +x 00:27:25.381 ************************************ 00:27:25.381 END TEST dd_sparse_bdev_to_file 00:27:25.381 ************************************ 00:27:25.381 01:09:59 -- dd/sparse.sh@1 -- # cleanup 00:27:25.381 01:09:59 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:27:25.381 01:09:59 -- dd/sparse.sh@12 -- # rm file_zero1 00:27:25.641 01:09:59 -- dd/sparse.sh@13 -- # rm file_zero2 00:27:25.641 01:09:59 -- dd/sparse.sh@14 -- # rm file_zero3 00:27:25.641 ************************************ 00:27:25.641 END TEST spdk_dd_sparse 00:27:25.641 ************************************ 00:27:25.641 00:27:25.641 real 0m3.360s 00:27:25.641 user 0m1.810s 00:27:25.641 sys 0m1.206s 00:27:25.641 01:09:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:25.641 01:09:59 -- common/autotest_common.sh@10 -- # set +x 00:27:25.641 01:09:59 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:25.641 01:09:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:25.641 01:09:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:25.641 01:09:59 -- common/autotest_common.sh@10 -- # set +x 00:27:25.641 ************************************ 00:27:25.641 START TEST spdk_dd_negative 00:27:25.641 ************************************ 00:27:25.641 01:09:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:25.641 * Looking for test storage... 
00:27:25.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:25.641 01:09:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:25.641 01:09:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:25.641 01:09:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:25.901 01:10:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:25.901 01:10:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:25.901 01:10:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:25.901 01:10:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:25.901 01:10:00 -- scripts/common.sh@335 -- # IFS=.-: 00:27:25.901 01:10:00 -- scripts/common.sh@335 -- # read -ra ver1 00:27:25.901 01:10:00 -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.901 01:10:00 -- scripts/common.sh@336 -- # read -ra ver2 00:27:25.901 01:10:00 -- scripts/common.sh@337 -- # local 'op=<' 00:27:25.901 01:10:00 -- scripts/common.sh@339 -- # ver1_l=2 00:27:25.901 01:10:00 -- scripts/common.sh@340 -- # ver2_l=1 00:27:25.901 01:10:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:25.901 01:10:00 -- scripts/common.sh@343 -- # case "$op" in 00:27:25.901 01:10:00 -- scripts/common.sh@344 -- # : 1 00:27:25.901 01:10:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:25.901 01:10:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:25.901 01:10:00 -- scripts/common.sh@364 -- # decimal 1 00:27:25.901 01:10:00 -- scripts/common.sh@352 -- # local d=1 00:27:25.901 01:10:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.901 01:10:00 -- scripts/common.sh@354 -- # echo 1 00:27:25.901 01:10:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:25.901 01:10:00 -- scripts/common.sh@365 -- # decimal 2 00:27:25.901 01:10:00 -- scripts/common.sh@352 -- # local d=2 00:27:25.901 01:10:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.901 01:10:00 -- scripts/common.sh@354 -- # echo 2 00:27:25.901 01:10:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:25.901 01:10:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:25.901 01:10:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:25.901 01:10:00 -- scripts/common.sh@367 -- # return 0 00:27:25.901 01:10:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.901 01:10:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:25.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.901 --rc genhtml_branch_coverage=1 00:27:25.901 --rc genhtml_function_coverage=1 00:27:25.901 --rc genhtml_legend=1 00:27:25.901 --rc geninfo_all_blocks=1 00:27:25.901 --rc geninfo_unexecuted_blocks=1 00:27:25.901 00:27:25.901 ' 00:27:25.901 01:10:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:25.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.901 --rc genhtml_branch_coverage=1 00:27:25.901 --rc genhtml_function_coverage=1 00:27:25.901 --rc genhtml_legend=1 00:27:25.902 --rc geninfo_all_blocks=1 00:27:25.902 --rc geninfo_unexecuted_blocks=1 00:27:25.902 00:27:25.902 ' 00:27:25.902 01:10:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:25.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.902 --rc genhtml_branch_coverage=1 00:27:25.902 --rc genhtml_function_coverage=1 00:27:25.902 --rc genhtml_legend=1 00:27:25.902 --rc geninfo_all_blocks=1 00:27:25.902 --rc geninfo_unexecuted_blocks=1 00:27:25.902 00:27:25.902 ' 00:27:25.902 01:10:00 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:25.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.902 --rc genhtml_branch_coverage=1 00:27:25.902 --rc genhtml_function_coverage=1 00:27:25.902 --rc genhtml_legend=1 00:27:25.902 --rc geninfo_all_blocks=1 00:27:25.902 --rc geninfo_unexecuted_blocks=1 00:27:25.902 00:27:25.902 ' 00:27:25.902 01:10:00 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:25.902 01:10:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.902 01:10:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.902 01:10:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.902 01:10:00 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:25.902 01:10:00 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:25.902 01:10:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:25.902 01:10:00 -- paths/export.sh@5 -- # export PATH 00:27:25.902 01:10:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:25.902 01:10:00 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:25.902 01:10:00 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:25.902 01:10:00 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:25.902 01:10:00 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:25.902 01:10:00 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 
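The dd_invalid_arguments case that follows hands spdk_dd the unrecognized flag --ii= and counts the run as a pass only when the binary exits non-zero (this run records exit status 2 and the "Invalid arguments" error shown below). A rough stand-alone sketch of that check, reusing the binary path seen in this log; the sketch itself is illustrative and not part of the harness:

  # expect failure: '--ii=' is not a valid spdk_dd option
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 2>/dev/null; then
      echo "unexpected success: unknown flag was accepted" >&2
      exit 1
  fi
  echo "spdk_dd rejected the unknown flag as expected"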
00:27:25.902 01:10:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:25.902 01:10:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:25.902 01:10:00 -- common/autotest_common.sh@10 -- # set +x 00:27:25.902 ************************************ 00:27:25.902 START TEST dd_invalid_arguments 00:27:25.902 ************************************ 00:27:25.902 01:10:00 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:27:25.902 01:10:00 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:25.902 01:10:00 -- common/autotest_common.sh@650 -- # local es=0 00:27:25.902 01:10:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:25.902 01:10:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:25.902 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:25.902 01:10:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:25.902 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:25.902 01:10:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:25.902 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:25.902 01:10:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:25.902 01:10:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:25.902 01:10:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:25.902 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:27:25.902 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:27:25.902 options: 00:27:25.902 -c, --config JSON config file (default none) 00:27:25.902 --json JSON config file (default none) 00:27:25.902 --json-ignore-init-errors 00:27:25.902 don't exit on invalid config entry 00:27:25.902 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:27:25.902 -g, --single-file-segments 00:27:25.902 force creating just one hugetlbfs file 00:27:25.902 -h, --help show this usage 00:27:25.902 -i, --shm-id shared memory ID (optional) 00:27:25.902 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:27:25.902 --lcores lcore to CPU mapping list. The list is in the format: 00:27:25.902 [<,lcores[@CPUs]>...] 00:27:25.902 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:27:25.902 Within the group, '-' is used for range separator, 00:27:25.902 ',' is used for single number separator. 00:27:25.902 '( )' can be omitted for single element group, 00:27:25.902 '@' can be omitted if cpus and lcores have the same value 00:27:25.902 -n, --mem-channels channel number of memory channels used for DPDK 00:27:25.902 -p, --main-core main (primary) core for DPDK 00:27:25.902 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:27:25.902 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:27:25.902 --disable-cpumask-locks Disable CPU core lock files. 
00:27:25.902 --silence-noticelog disable notice level logging to stderr 00:27:25.902 --msg-mempool-size global message memory pool size in count (default: 262143) 00:27:25.902 -u, --no-pci disable PCI access 00:27:25.902 --wait-for-rpc wait for RPCs to initialize subsystems 00:27:25.902 --max-delay maximum reactor delay (in microseconds) 00:27:25.902 -B, --pci-blocked pci addr to block (can be used more than once) 00:27:25.902 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:27:25.902 -R, --huge-unlink unlink huge files after initialization 00:27:25.902 -v, --version print SPDK version 00:27:25.902 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:27:25.902 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:27:25.902 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:27:25.902 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:27:25.902 Tracepoints vary in size and can use more than one trace entry. 00:27:25.902 --rpcs-allowed comma-separated list of permitted RPCS 00:27:25.902 --env-context Opaque context for use of the env implementation 00:27:25.902 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:27:25.902 --no-huge run without using hugepages 00:27:25.902 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:27:25.902 -e, --tpoint-group [:] 00:27:25.902 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:27:25.902 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:27:25.902 Groups and [2024-11-18 01:10:00.184849] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:27:25.902 masks can be combined (e.g. thread,bdev:0x1). 00:27:25.902 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:27:25.902 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:27:25.903 [--------- DD Options ---------] 00:27:25.903 --if Input file. Must specify either --if or --ib. 00:27:25.903 --ib Input bdev. Must specifier either --if or --ib 00:27:25.903 --of Output file. Must specify either --of or --ob. 00:27:25.903 --ob Output bdev. Must specify either --of or --ob. 00:27:25.903 --iflag Input file flags. 00:27:25.903 --oflag Output file flags. 00:27:25.903 --bs I/O unit size (default: 4096) 00:27:25.903 --qd Queue depth (default: 2) 00:27:25.903 --count I/O unit count. The number of I/O units to copy. (default: all) 00:27:25.903 --skip Skip this many I/O units at start of input. 
(default: 0) 00:27:25.903 --seek Skip this many I/O units at start of output. (default: 0) 00:27:25.903 --aio Force usage of AIO. (by default io_uring is used if available) 00:27:25.903 --sparse Enable hole skipping in input target 00:27:25.903 Available iflag and oflag values: 00:27:25.903 append - append mode 00:27:25.903 direct - use direct I/O for data 00:27:25.903 directory - fail unless a directory 00:27:25.903 dsync - use synchronized I/O for data 00:27:25.903 noatime - do not update access time 00:27:25.903 noctty - do not assign controlling terminal from file 00:27:25.903 nofollow - do not follow symlinks 00:27:25.903 nonblock - use non-blocking I/O 00:27:25.903 sync - use synchronized I/O for data and metadata 00:27:25.903 01:10:00 -- common/autotest_common.sh@653 -- # es=2 00:27:25.903 01:10:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:25.903 01:10:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:25.903 01:10:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:25.903 00:27:25.903 real 0m0.127s 00:27:25.903 user 0m0.038s 00:27:25.903 sys 0m0.088s 00:27:25.903 01:10:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:25.903 01:10:00 -- common/autotest_common.sh@10 -- # set +x 00:27:25.903 ************************************ 00:27:25.903 END TEST dd_invalid_arguments 00:27:25.903 ************************************ 00:27:25.903 01:10:00 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:27:25.903 01:10:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:25.903 01:10:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:25.903 01:10:00 -- common/autotest_common.sh@10 -- # set +x 00:27:26.163 ************************************ 00:27:26.163 START TEST dd_double_input 00:27:26.163 ************************************ 00:27:26.163 01:10:00 -- common/autotest_common.sh@1114 -- # double_input 00:27:26.163 01:10:00 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:26.163 01:10:00 -- common/autotest_common.sh@650 -- # local es=0 00:27:26.163 01:10:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:26.163 01:10:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.163 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.163 01:10:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.163 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.163 01:10:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.163 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.163 01:10:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.163 01:10:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:26.164 01:10:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:26.164 [2024-11-18 01:10:00.379894] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
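The error above closes the dd_double_input case: spdk_dd was given both a file input (--if) and a bdev input (--ib) and had to refuse the combination, which the harness captures as es=22 just below. A minimal equivalent of what the NOT wrapper asserts, assuming the same repo layout as this run (purely illustrative):

  # expect failure: --if and --ib are mutually exclusive input sources
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
         --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 2>/dev/null; then
      echo "unexpected success: --if and --ib were both accepted" >&2
      exit 1
  fi
  echo "mutually exclusive input flags rejected as expected"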
00:27:26.164 01:10:00 -- common/autotest_common.sh@653 -- # es=22 00:27:26.164 01:10:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:26.164 01:10:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:26.164 01:10:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:26.164 00:27:26.164 real 0m0.130s 00:27:26.164 user 0m0.062s 00:27:26.164 sys 0m0.065s 00:27:26.164 01:10:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:26.164 01:10:00 -- common/autotest_common.sh@10 -- # set +x 00:27:26.164 ************************************ 00:27:26.164 END TEST dd_double_input 00:27:26.164 ************************************ 00:27:26.164 01:10:00 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:27:26.164 01:10:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:26.164 01:10:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:26.164 01:10:00 -- common/autotest_common.sh@10 -- # set +x 00:27:26.164 ************************************ 00:27:26.164 START TEST dd_double_output 00:27:26.164 ************************************ 00:27:26.164 01:10:00 -- common/autotest_common.sh@1114 -- # double_output 00:27:26.164 01:10:00 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:26.164 01:10:00 -- common/autotest_common.sh@650 -- # local es=0 00:27:26.164 01:10:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:26.164 01:10:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.164 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.164 01:10:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.164 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.164 01:10:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.164 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.164 01:10:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.164 01:10:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:26.164 01:10:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:26.423 [2024-11-18 01:10:00.572564] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
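The dd_double_output case just above is the mirror-image check on the output side: a file target (--of) and a bdev target (--ob) supplied together must be refused the same way. A corresponding one-line probe under the same assumed paths:

  # expect failure: --of and --ob are mutually exclusive output targets
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 2>/dev/null \
      || echo "conflicting --of/--ob rejected as expected (exit status $?)"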
00:27:26.423 01:10:00 -- common/autotest_common.sh@653 -- # es=22 00:27:26.423 01:10:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:26.423 01:10:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:26.423 01:10:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:26.423 00:27:26.423 real 0m0.123s 00:27:26.423 user 0m0.034s 00:27:26.423 sys 0m0.087s 00:27:26.423 01:10:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:26.423 01:10:00 -- common/autotest_common.sh@10 -- # set +x 00:27:26.423 ************************************ 00:27:26.423 END TEST dd_double_output 00:27:26.423 ************************************ 00:27:26.423 01:10:00 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:27:26.423 01:10:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:26.423 01:10:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:26.423 01:10:00 -- common/autotest_common.sh@10 -- # set +x 00:27:26.423 ************************************ 00:27:26.423 START TEST dd_no_input 00:27:26.423 ************************************ 00:27:26.423 01:10:00 -- common/autotest_common.sh@1114 -- # no_input 00:27:26.423 01:10:00 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:26.423 01:10:00 -- common/autotest_common.sh@650 -- # local es=0 00:27:26.423 01:10:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:26.423 01:10:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.423 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.423 01:10:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.423 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.423 01:10:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.423 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.423 01:10:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.423 01:10:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:26.423 01:10:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:26.423 [2024-11-18 01:10:00.757902] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:27:26.423 01:10:00 -- common/autotest_common.sh@653 -- # es=22 00:27:26.423 01:10:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:26.423 01:10:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:26.423 01:10:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:26.423 00:27:26.423 real 0m0.121s 00:27:26.423 user 0m0.051s 00:27:26.423 sys 0m0.071s 00:27:26.423 01:10:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:26.423 01:10:00 -- common/autotest_common.sh@10 -- # set +x 00:27:26.423 ************************************ 00:27:26.423 END TEST dd_no_input 00:27:26.423 ************************************ 00:27:26.683 01:10:00 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:27:26.683 01:10:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:26.683 01:10:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:26.683 01:10:00 -- common/autotest_common.sh@10 -- # set +x 00:27:26.683 ************************************ 
00:27:26.683 START TEST dd_no_output 00:27:26.683 ************************************ 00:27:26.683 01:10:00 -- common/autotest_common.sh@1114 -- # no_output 00:27:26.683 01:10:00 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:26.683 01:10:00 -- common/autotest_common.sh@650 -- # local es=0 00:27:26.683 01:10:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:26.683 01:10:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.683 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.683 01:10:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.683 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.683 01:10:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.683 01:10:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.683 01:10:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.683 01:10:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:26.683 01:10:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:26.683 [2024-11-18 01:10:00.945435] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:27:26.683 01:10:00 -- common/autotest_common.sh@653 -- # es=22 00:27:26.683 01:10:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:26.683 01:10:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:26.683 01:10:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:26.683 00:27:26.683 real 0m0.117s 00:27:26.683 user 0m0.054s 00:27:26.683 sys 0m0.064s 00:27:26.683 01:10:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:26.683 01:10:00 -- common/autotest_common.sh@10 -- # set +x 00:27:26.683 ************************************ 00:27:26.683 END TEST dd_no_output 00:27:26.683 ************************************ 00:27:26.683 01:10:01 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:27:26.683 01:10:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:26.683 01:10:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:26.683 01:10:01 -- common/autotest_common.sh@10 -- # set +x 00:27:26.683 ************************************ 00:27:26.683 START TEST dd_wrong_blocksize 00:27:26.683 ************************************ 00:27:26.683 01:10:01 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:27:26.683 01:10:01 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:26.683 01:10:01 -- common/autotest_common.sh@650 -- # local es=0 00:27:26.683 01:10:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:26.683 01:10:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.683 01:10:01 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:27:26.683 01:10:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.683 01:10:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.683 01:10:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.683 01:10:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.683 01:10:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.683 01:10:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:26.684 01:10:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:26.943 [2024-11-18 01:10:01.114386] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:27:26.943 01:10:01 -- common/autotest_common.sh@653 -- # es=22 00:27:26.943 01:10:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:26.943 01:10:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:26.943 01:10:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:26.943 00:27:26.943 real 0m0.105s 00:27:26.943 user 0m0.036s 00:27:26.943 sys 0m0.070s 00:27:26.943 01:10:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:26.943 01:10:01 -- common/autotest_common.sh@10 -- # set +x 00:27:26.943 ************************************ 00:27:26.943 END TEST dd_wrong_blocksize 00:27:26.943 ************************************ 00:27:26.943 01:10:01 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:27:26.943 01:10:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:26.943 01:10:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:26.943 01:10:01 -- common/autotest_common.sh@10 -- # set +x 00:27:26.943 ************************************ 00:27:26.943 START TEST dd_smaller_blocksize 00:27:26.943 ************************************ 00:27:26.943 01:10:01 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:27:26.943 01:10:01 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:26.943 01:10:01 -- common/autotest_common.sh@650 -- # local es=0 00:27:26.943 01:10:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:26.943 01:10:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.943 01:10:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.943 01:10:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.943 01:10:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.943 01:10:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.943 01:10:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.943 01:10:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.943 01:10:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:27:26.943 01:10:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:26.943 [2024-11-18 01:10:01.286825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:26.943 [2024-11-18 01:10:01.287018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145990 ] 00:27:27.202 [2024-11-18 01:10:01.430935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.202 [2024-11-18 01:10:01.502198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.461 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:27:27.461 [2024-11-18 01:10:01.712094] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:27:27.461 [2024-11-18 01:10:01.712238] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:27.720 [2024-11-18 01:10:01.897703] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:27.720 01:10:02 -- common/autotest_common.sh@653 -- # es=244 00:27:27.720 01:10:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:27.720 01:10:02 -- common/autotest_common.sh@662 -- # es=116 00:27:27.720 01:10:02 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:27.720 01:10:02 -- common/autotest_common.sh@670 -- # es=1 00:27:27.720 01:10:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:27.720 00:27:27.720 real 0m0.870s 00:27:27.720 user 0m0.460s 00:27:27.720 sys 0m0.311s 00:27:27.720 01:10:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:27.720 01:10:02 -- common/autotest_common.sh@10 -- # set +x 00:27:27.720 ************************************ 00:27:27.720 END TEST dd_smaller_blocksize 00:27:27.720 ************************************ 00:27:27.980 01:10:02 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:27:27.980 01:10:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:27.980 01:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:27.980 01:10:02 -- common/autotest_common.sh@10 -- # set +x 00:27:27.980 ************************************ 00:27:27.980 START TEST dd_invalid_count 00:27:27.980 ************************************ 00:27:27.980 01:10:02 -- common/autotest_common.sh@1114 -- # invalid_count 00:27:27.980 01:10:02 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:27.980 01:10:02 -- common/autotest_common.sh@650 -- # local es=0 00:27:27.980 01:10:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:27.980 01:10:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:27.980 01:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.980 01:10:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:27.980 01:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.980 01:10:02 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:27.980 01:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.980 01:10:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:27.980 01:10:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:27.980 01:10:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:27.980 [2024-11-18 01:10:02.238490] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:27:27.980 01:10:02 -- common/autotest_common.sh@653 -- # es=22 00:27:27.980 01:10:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:27.980 01:10:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:27.980 01:10:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:27.980 00:27:27.980 real 0m0.128s 00:27:27.980 user 0m0.057s 00:27:27.980 sys 0m0.072s 00:27:27.980 01:10:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:27.980 01:10:02 -- common/autotest_common.sh@10 -- # set +x 00:27:27.980 ************************************ 00:27:27.980 END TEST dd_invalid_count 00:27:27.980 ************************************ 00:27:27.980 01:10:02 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:27:27.980 01:10:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:27.980 01:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:27.980 01:10:02 -- common/autotest_common.sh@10 -- # set +x 00:27:27.980 ************************************ 00:27:27.980 START TEST dd_invalid_oflag 00:27:27.980 ************************************ 00:27:27.980 01:10:02 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:27:27.980 01:10:02 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:27.980 01:10:02 -- common/autotest_common.sh@650 -- # local es=0 00:27:27.980 01:10:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:27.980 01:10:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:27.980 01:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.980 01:10:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:27.980 01:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.980 01:10:02 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:27.980 01:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.980 01:10:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:27.980 01:10:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:27.980 01:10:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:28.240 [2024-11-18 01:10:02.433328] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:27:28.240 01:10:02 -- common/autotest_common.sh@653 -- # es=22 00:27:28.240 01:10:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:28.240 01:10:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:28.240 
01:10:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:28.240 00:27:28.240 real 0m0.121s 00:27:28.240 user 0m0.054s 00:27:28.240 sys 0m0.068s 00:27:28.240 01:10:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:28.240 01:10:02 -- common/autotest_common.sh@10 -- # set +x 00:27:28.240 ************************************ 00:27:28.240 END TEST dd_invalid_oflag 00:27:28.240 ************************************ 00:27:28.240 01:10:02 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:27:28.240 01:10:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:28.240 01:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:28.240 01:10:02 -- common/autotest_common.sh@10 -- # set +x 00:27:28.240 ************************************ 00:27:28.240 START TEST dd_invalid_iflag 00:27:28.240 ************************************ 00:27:28.240 01:10:02 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:27:28.240 01:10:02 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:28.240 01:10:02 -- common/autotest_common.sh@650 -- # local es=0 00:27:28.240 01:10:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:28.240 01:10:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.240 01:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.240 01:10:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.240 01:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.240 01:10:02 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.240 01:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.240 01:10:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.240 01:10:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:28.240 01:10:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:28.240 [2024-11-18 01:10:02.604050] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:27:28.500 01:10:02 -- common/autotest_common.sh@653 -- # es=22 00:27:28.500 01:10:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:28.500 01:10:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:28.500 01:10:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:28.500 00:27:28.500 real 0m0.118s 00:27:28.500 user 0m0.060s 00:27:28.500 sys 0m0.059s 00:27:28.500 01:10:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:28.500 01:10:02 -- common/autotest_common.sh@10 -- # set +x 00:27:28.500 ************************************ 00:27:28.500 END TEST dd_invalid_iflag 00:27:28.500 ************************************ 00:27:28.500 01:10:02 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:27:28.500 01:10:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:28.500 01:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:28.500 01:10:02 -- common/autotest_common.sh@10 -- # set +x 00:27:28.500 ************************************ 00:27:28.500 START TEST dd_unknown_flag 00:27:28.500 ************************************ 00:27:28.500 01:10:02 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:27:28.500 01:10:02 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:28.500 01:10:02 -- common/autotest_common.sh@650 -- # local es=0 00:27:28.500 01:10:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:28.500 01:10:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.500 01:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.500 01:10:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.500 01:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.500 01:10:02 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.500 01:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.500 01:10:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.500 01:10:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:28.500 01:10:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:28.500 [2024-11-18 01:10:02.778812] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:28.500 [2024-11-18 01:10:02.779015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146114 ] 00:27:28.760 [2024-11-18 01:10:02.920416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.760 [2024-11-18 01:10:02.987491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.760 [2024-11-18 01:10:03.101964] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:27:28.760 [2024-11-18 01:10:03.102092] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:27:28.760 [2024-11-18 01:10:03.102243] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:27:28.760 [2024-11-18 01:10:03.102308] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:29.019 [2024-11-18 01:10:03.282538] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:29.279 01:10:03 -- common/autotest_common.sh@653 -- # es=236 00:27:29.279 01:10:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:29.279 01:10:03 -- common/autotest_common.sh@662 -- # es=108 00:27:29.279 01:10:03 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:29.279 01:10:03 -- common/autotest_common.sh@670 -- # es=1 00:27:29.279 01:10:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:29.279 00:27:29.279 real 0m0.764s 00:27:29.279 user 0m0.399s 00:27:29.279 sys 0m0.266s 00:27:29.279 01:10:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:29.279 ************************************ 00:27:29.279 END TEST dd_unknown_flag 00:27:29.279 01:10:03 -- 
common/autotest_common.sh@10 -- # set +x 00:27:29.279 ************************************ 00:27:29.279 01:10:03 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:27:29.279 01:10:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:29.279 01:10:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:29.279 01:10:03 -- common/autotest_common.sh@10 -- # set +x 00:27:29.279 ************************************ 00:27:29.279 START TEST dd_invalid_json 00:27:29.279 ************************************ 00:27:29.279 01:10:03 -- common/autotest_common.sh@1114 -- # invalid_json 00:27:29.279 01:10:03 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:29.279 01:10:03 -- common/autotest_common.sh@650 -- # local es=0 00:27:29.279 01:10:03 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:29.279 01:10:03 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:29.279 01:10:03 -- dd/negative_dd.sh@95 -- # : 00:27:29.279 01:10:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:29.279 01:10:03 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:29.279 01:10:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:29.279 01:10:03 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:29.279 01:10:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:29.279 01:10:03 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:29.279 01:10:03 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:29.279 01:10:03 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:29.279 [2024-11-18 01:10:03.625772] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:29.279 [2024-11-18 01:10:03.626032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146149 ] 00:27:29.538 [2024-11-18 01:10:03.780520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.539 [2024-11-18 01:10:03.851283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.539 [2024-11-18 01:10:03.851516] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:27:29.539 [2024-11-18 01:10:03.851562] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:29.539 [2024-11-18 01:10:03.851652] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:29.798 01:10:04 -- common/autotest_common.sh@653 -- # es=234 00:27:29.798 01:10:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:29.798 01:10:04 -- common/autotest_common.sh@662 -- # es=106 00:27:29.798 01:10:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:29.798 01:10:04 -- common/autotest_common.sh@670 -- # es=1 00:27:29.798 01:10:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:29.798 00:27:29.798 real 0m0.494s 00:27:29.798 user 0m0.225s 00:27:29.798 sys 0m0.171s 00:27:29.798 01:10:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:29.798 ************************************ 00:27:29.798 END TEST dd_invalid_json 00:27:29.798 ************************************ 00:27:29.798 01:10:04 -- common/autotest_common.sh@10 -- # set +x 00:27:29.798 ************************************ 00:27:29.798 END TEST spdk_dd_negative 00:27:29.798 ************************************ 00:27:29.798 00:27:29.798 real 0m4.221s 00:27:29.798 user 0m2.046s 00:27:29.798 sys 0m1.880s 00:27:29.798 01:10:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:29.798 01:10:04 -- common/autotest_common.sh@10 -- # set +x 00:27:29.798 00:27:29.798 real 1m23.437s 00:27:29.798 user 0m46.726s 00:27:29.798 sys 0m26.389s 00:27:29.798 01:10:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:29.798 01:10:04 -- common/autotest_common.sh@10 -- # set +x 00:27:29.798 ************************************ 00:27:29.798 END TEST spdk_dd 00:27:29.798 ************************************ 00:27:29.798 01:10:04 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:27:29.798 01:10:04 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:27:29.798 01:10:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:29.798 01:10:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:29.798 01:10:04 -- common/autotest_common.sh@10 -- # set +x 00:27:30.057 ************************************ 00:27:30.057 START TEST blockdev_nvme 00:27:30.057 ************************************ 00:27:30.057 01:10:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:27:30.057 * Looking for test storage... 
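[Editor's note] The dd_invalid_* cases above all follow one pattern: run spdk_dd with a deliberately bad argument through the NOT/valid_exec_arg wrappers and require a non-zero exit status (the es=... checks). A minimal standalone sketch of that pattern, using the binary and dump-file paths from this log; the check_fails helper is illustrative only and is not part of the repo:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
DD=$SPDK_DIR/build/bin/spdk_dd
IF=$SPDK_DIR/test/dd/dd.dump0
OF=$SPDK_DIR/test/dd/dd.dump1

check_fails() {
  # run the command, discard its output, and demand a non-zero exit code,
  # mirroring what NOT and the (( !es == 0 )) check do in autotest_common.sh
  if "$@" >/dev/null 2>&1; then
    echo "expected failure but command succeeded: $*" >&2
    return 1
  fi
}

check_fails "$DD" --if="$IF" --of="$OF" --count=-9    # invalid --count
check_fails "$DD" --ib= --ob= --oflag=0               # --oflag without --of
check_fails "$DD" --ib= --ob= --iflag=0               # --iflag without --if
check_fails "$DD" --if="$IF" --of="$OF" --oflag=-1    # unknown file flag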
00:27:30.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:30.057 01:10:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:30.057 01:10:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:30.057 01:10:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:30.057 01:10:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:30.057 01:10:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:30.057 01:10:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:30.057 01:10:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:30.057 01:10:04 -- scripts/common.sh@335 -- # IFS=.-: 00:27:30.057 01:10:04 -- scripts/common.sh@335 -- # read -ra ver1 00:27:30.057 01:10:04 -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.057 01:10:04 -- scripts/common.sh@336 -- # read -ra ver2 00:27:30.057 01:10:04 -- scripts/common.sh@337 -- # local 'op=<' 00:27:30.057 01:10:04 -- scripts/common.sh@339 -- # ver1_l=2 00:27:30.057 01:10:04 -- scripts/common.sh@340 -- # ver2_l=1 00:27:30.057 01:10:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:30.057 01:10:04 -- scripts/common.sh@343 -- # case "$op" in 00:27:30.057 01:10:04 -- scripts/common.sh@344 -- # : 1 00:27:30.057 01:10:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:30.057 01:10:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:30.057 01:10:04 -- scripts/common.sh@364 -- # decimal 1 00:27:30.057 01:10:04 -- scripts/common.sh@352 -- # local d=1 00:27:30.057 01:10:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.057 01:10:04 -- scripts/common.sh@354 -- # echo 1 00:27:30.057 01:10:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:30.057 01:10:04 -- scripts/common.sh@365 -- # decimal 2 00:27:30.057 01:10:04 -- scripts/common.sh@352 -- # local d=2 00:27:30.057 01:10:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.057 01:10:04 -- scripts/common.sh@354 -- # echo 2 00:27:30.057 01:10:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:30.057 01:10:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:30.057 01:10:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:30.057 01:10:04 -- scripts/common.sh@367 -- # return 0 00:27:30.057 01:10:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.057 01:10:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:30.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.057 --rc genhtml_branch_coverage=1 00:27:30.057 --rc genhtml_function_coverage=1 00:27:30.057 --rc genhtml_legend=1 00:27:30.057 --rc geninfo_all_blocks=1 00:27:30.057 --rc geninfo_unexecuted_blocks=1 00:27:30.057 00:27:30.057 ' 00:27:30.057 01:10:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:30.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.057 --rc genhtml_branch_coverage=1 00:27:30.057 --rc genhtml_function_coverage=1 00:27:30.057 --rc genhtml_legend=1 00:27:30.057 --rc geninfo_all_blocks=1 00:27:30.057 --rc geninfo_unexecuted_blocks=1 00:27:30.057 00:27:30.057 ' 00:27:30.057 01:10:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:30.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.057 --rc genhtml_branch_coverage=1 00:27:30.057 --rc genhtml_function_coverage=1 00:27:30.057 --rc genhtml_legend=1 00:27:30.057 --rc geninfo_all_blocks=1 00:27:30.057 --rc geninfo_unexecuted_blocks=1 00:27:30.057 00:27:30.057 ' 00:27:30.057 01:10:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:30.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.058 --rc genhtml_branch_coverage=1 00:27:30.058 --rc genhtml_function_coverage=1 00:27:30.058 --rc genhtml_legend=1 00:27:30.058 --rc geninfo_all_blocks=1 00:27:30.058 --rc geninfo_unexecuted_blocks=1 00:27:30.058 00:27:30.058 ' 00:27:30.058 01:10:04 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:30.058 01:10:04 -- bdev/nbd_common.sh@6 -- # set -e 00:27:30.058 01:10:04 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:30.058 01:10:04 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:30.058 01:10:04 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:30.058 01:10:04 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:30.058 01:10:04 -- bdev/blockdev.sh@18 -- # : 00:27:30.058 01:10:04 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:27:30.058 01:10:04 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:27:30.058 01:10:04 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:27:30.058 01:10:04 -- bdev/blockdev.sh@672 -- # uname -s 00:27:30.058 01:10:04 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:27:30.058 01:10:04 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:27:30.058 01:10:04 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:27:30.058 01:10:04 -- bdev/blockdev.sh@681 -- # crypto_device= 00:27:30.058 01:10:04 -- bdev/blockdev.sh@682 -- # dek= 00:27:30.058 01:10:04 -- bdev/blockdev.sh@683 -- # env_ctx= 00:27:30.058 01:10:04 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:27:30.058 01:10:04 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:27:30.058 01:10:04 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:27:30.058 01:10:04 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:27:30.058 01:10:04 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:27:30.058 01:10:04 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=146242 00:27:30.058 01:10:04 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:30.058 01:10:04 -- bdev/blockdev.sh@47 -- # waitforlisten 146242 00:27:30.058 01:10:04 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:30.058 01:10:04 -- common/autotest_common.sh@829 -- # '[' -z 146242 ']' 00:27:30.058 01:10:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.058 01:10:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:30.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.058 01:10:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.058 01:10:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:30.058 01:10:04 -- common/autotest_common.sh@10 -- # set +x 00:27:30.317 [2024-11-18 01:10:04.488784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
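[Editor's note] The blockdev_nvme prologue above reduces to three steps: start spdk_tgt, wait for its RPC socket, then load the bdev config that scripts/gen_nvme.sh produced (a single bdev_nvme_attach_controller for the controller at 0000:00:06.0 on this machine). A hedged by-hand equivalent; the socket poll is a crude stand-in for the waitforlisten helper, and the rpc.py call carries the same parameters as the load_subsystem_config JSON shown below:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/build/bin/spdk_tgt" &
TGT_PID=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the default RPC socket
"$SPDK_DIR/scripts/rpc.py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
# ... run the bdev tests, then tear the target down:
kill "$TGT_PID"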
00:27:30.317 [2024-11-18 01:10:04.488998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146242 ] 00:27:30.317 [2024-11-18 01:10:04.632559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.317 [2024-11-18 01:10:04.703137] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:30.317 [2024-11-18 01:10:04.703362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.254 01:10:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:31.254 01:10:05 -- common/autotest_common.sh@862 -- # return 0 00:27:31.254 01:10:05 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:27:31.254 01:10:05 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:27:31.254 01:10:05 -- bdev/blockdev.sh@79 -- # local json 00:27:31.254 01:10:05 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:27:31.254 01:10:05 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:31.254 01:10:05 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:27:31.254 01:10:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.254 01:10:05 -- common/autotest_common.sh@10 -- # set +x 00:27:31.254 01:10:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.254 01:10:05 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:27:31.254 01:10:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.254 01:10:05 -- common/autotest_common.sh@10 -- # set +x 00:27:31.254 01:10:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.254 01:10:05 -- bdev/blockdev.sh@738 -- # cat 00:27:31.254 01:10:05 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:27:31.254 01:10:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.254 01:10:05 -- common/autotest_common.sh@10 -- # set +x 00:27:31.254 01:10:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.254 01:10:05 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:27:31.254 01:10:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.254 01:10:05 -- common/autotest_common.sh@10 -- # set +x 00:27:31.254 01:10:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.254 01:10:05 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:31.254 01:10:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.254 01:10:05 -- common/autotest_common.sh@10 -- # set +x 00:27:31.254 01:10:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.254 01:10:05 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:27:31.254 01:10:05 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:27:31.254 01:10:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.254 01:10:05 -- common/autotest_common.sh@10 -- # set +x 00:27:31.254 01:10:05 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:27:31.254 01:10:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.513 01:10:05 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:27:31.513 01:10:05 -- bdev/blockdev.sh@747 -- # jq -r .name 00:27:31.514 01:10:05 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' 
"aliases": [' ' "2894e328-03b5-48c7-a55f-0ce5a70f0a4a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2894e328-03b5-48c7-a55f-0ce5a70f0a4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:27:31.514 01:10:05 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:27:31.514 01:10:05 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:27:31.514 01:10:05 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:27:31.514 01:10:05 -- bdev/blockdev.sh@752 -- # killprocess 146242 00:27:31.514 01:10:05 -- common/autotest_common.sh@936 -- # '[' -z 146242 ']' 00:27:31.514 01:10:05 -- common/autotest_common.sh@940 -- # kill -0 146242 00:27:31.514 01:10:05 -- common/autotest_common.sh@941 -- # uname 00:27:31.514 01:10:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:31.514 01:10:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146242 00:27:31.514 01:10:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:31.514 killing process with pid 146242 00:27:31.514 01:10:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:31.514 01:10:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146242' 00:27:31.514 01:10:05 -- common/autotest_common.sh@955 -- # kill 146242 00:27:31.514 01:10:05 -- common/autotest_common.sh@960 -- # wait 146242 00:27:32.082 01:10:06 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:32.082 01:10:06 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:27:32.082 01:10:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:27:32.082 01:10:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:32.082 01:10:06 -- common/autotest_common.sh@10 -- # set +x 00:27:32.082 ************************************ 00:27:32.082 START TEST bdev_hello_world 00:27:32.082 ************************************ 00:27:32.082 01:10:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:27:32.082 [2024-11-18 01:10:06.474928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:32.082 [2024-11-18 01:10:06.475141] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146315 ] 00:27:32.347 [2024-11-18 01:10:06.620194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.347 [2024-11-18 01:10:06.688594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.614 [2024-11-18 01:10:06.930757] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:32.614 [2024-11-18 01:10:06.930860] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:27:32.614 [2024-11-18 01:10:06.930919] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:32.614 [2024-11-18 01:10:06.933581] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:32.614 [2024-11-18 01:10:06.934000] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:32.614 [2024-11-18 01:10:06.934055] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:32.614 [2024-11-18 01:10:06.934343] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:27:32.614 00:27:32.614 [2024-11-18 01:10:06.934409] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:33.193 00:27:33.193 real 0m0.907s 00:27:33.193 user 0m0.537s 00:27:33.193 sys 0m0.270s 00:27:33.193 01:10:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:33.193 01:10:07 -- common/autotest_common.sh@10 -- # set +x 00:27:33.193 ************************************ 00:27:33.193 END TEST bdev_hello_world 00:27:33.193 ************************************ 00:27:33.193 01:10:07 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:27:33.193 01:10:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:33.193 01:10:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:33.193 01:10:07 -- common/autotest_common.sh@10 -- # set +x 00:27:33.193 ************************************ 00:27:33.193 START TEST bdev_bounds 00:27:33.193 ************************************ 00:27:33.193 01:10:07 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:27:33.193 01:10:07 -- bdev/blockdev.sh@288 -- # bdevio_pid=146347 00:27:33.193 01:10:07 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:33.193 Process bdevio pid: 146347 00:27:33.193 01:10:07 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 146347' 00:27:33.193 01:10:07 -- bdev/blockdev.sh@291 -- # waitforlisten 146347 00:27:33.193 01:10:07 -- common/autotest_common.sh@829 -- # '[' -z 146347 ']' 00:27:33.193 01:10:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.193 01:10:07 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:33.193 01:10:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:33.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.193 01:10:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
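[Editor's note] The bdev_hello_world run above is simply the stock example binary pointed at the generated config and the first unclaimed bdev; it can be reproduced directly with the same arguments recorded in the log:

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1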
00:27:33.193 01:10:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:33.193 01:10:07 -- common/autotest_common.sh@10 -- # set +x 00:27:33.193 [2024-11-18 01:10:07.468334] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:33.193 [2024-11-18 01:10:07.468618] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146347 ] 00:27:33.453 [2024-11-18 01:10:07.640753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:33.453 [2024-11-18 01:10:07.714563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.453 [2024-11-18 01:10:07.714604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.453 [2024-11-18 01:10:07.714600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.022 01:10:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:34.022 01:10:08 -- common/autotest_common.sh@862 -- # return 0 00:27:34.022 01:10:08 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:34.282 I/O targets: 00:27:34.282 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:27:34.282 00:27:34.282 00:27:34.282 CUnit - A unit testing framework for C - Version 2.1-3 00:27:34.282 http://cunit.sourceforge.net/ 00:27:34.282 00:27:34.282 00:27:34.282 Suite: bdevio tests on: Nvme0n1 00:27:34.282 Test: blockdev write read block ...passed 00:27:34.282 Test: blockdev write zeroes read block ...passed 00:27:34.282 Test: blockdev write zeroes read no split ...passed 00:27:34.282 Test: blockdev write zeroes read split ...passed 00:27:34.282 Test: blockdev write zeroes read split partial ...passed 00:27:34.282 Test: blockdev reset ...[2024-11-18 01:10:08.524978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:34.282 [2024-11-18 01:10:08.527275] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:34.282 passed 00:27:34.282 Test: blockdev write read 8 blocks ...passed 00:27:34.282 Test: blockdev write read size > 128k ...passed 00:27:34.282 Test: blockdev write read invalid size ...passed 00:27:34.282 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:34.282 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:34.282 Test: blockdev write read max offset ...passed 00:27:34.282 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:34.282 Test: blockdev writev readv 8 blocks ...passed 00:27:34.282 Test: blockdev writev readv 30 x 1block ...passed 00:27:34.282 Test: blockdev writev readv block ...passed 00:27:34.282 Test: blockdev writev readv size > 128k ...passed 00:27:34.282 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:34.282 Test: blockdev comparev and writev ...[2024-11-18 01:10:08.533748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x7a40d000 len:0x1000 00:27:34.282 [2024-11-18 01:10:08.533845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:27:34.282 passed 00:27:34.282 Test: blockdev nvme passthru rw ...passed 00:27:34.282 Test: blockdev nvme passthru vendor specific ...[2024-11-18 01:10:08.534648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:27:34.282 [2024-11-18 01:10:08.534706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:27:34.282 passed 00:27:34.282 Test: blockdev nvme admin passthru ...passed 00:27:34.282 Test: blockdev copy ...passed 00:27:34.282 00:27:34.282 Run Summary: Type Total Ran Passed Failed Inactive 00:27:34.282 suites 1 1 n/a 0 0 00:27:34.282 tests 23 23 23 0 0 00:27:34.282 asserts 152 152 152 0 n/a 00:27:34.282 00:27:34.282 Elapsed time = 0.061 seconds 00:27:34.282 0 00:27:34.282 01:10:08 -- bdev/blockdev.sh@293 -- # killprocess 146347 00:27:34.282 01:10:08 -- common/autotest_common.sh@936 -- # '[' -z 146347 ']' 00:27:34.282 01:10:08 -- common/autotest_common.sh@940 -- # kill -0 146347 00:27:34.282 01:10:08 -- common/autotest_common.sh@941 -- # uname 00:27:34.282 01:10:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:34.282 01:10:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146347 00:27:34.282 01:10:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:34.282 01:10:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:34.282 01:10:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146347' 00:27:34.282 killing process with pid 146347 00:27:34.282 01:10:08 -- common/autotest_common.sh@955 -- # kill 146347 00:27:34.282 01:10:08 -- common/autotest_common.sh@960 -- # wait 146347 00:27:34.852 01:10:08 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:27:34.852 00:27:34.852 real 0m1.548s 00:27:34.852 user 0m3.717s 00:27:34.852 sys 0m0.393s 00:27:34.852 01:10:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:34.852 01:10:08 -- common/autotest_common.sh@10 -- # set +x 00:27:34.852 ************************************ 00:27:34.852 END TEST bdev_bounds 00:27:34.852 ************************************ 00:27:34.852 01:10:08 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
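[Editor's note] The bdev_bounds section above launches the bdevio app (reactor mask 0x7, hence the three reactors) and then drives its I/O boundary tests over RPC with tests.py. A hedged manual reproduction with the same arguments; the sleep is a crude stand-in for the waitforlisten helper:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK_DIR/test/bdev/bdev.json" &
sleep 2                                          # wait for the app before issuing RPCs
"$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests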
00:27:34.852 01:10:08 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:27:34.852 01:10:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:34.852 01:10:08 -- common/autotest_common.sh@10 -- # set +x 00:27:34.852 ************************************ 00:27:34.852 START TEST bdev_nbd 00:27:34.852 ************************************ 00:27:34.852 01:10:09 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:27:34.852 01:10:09 -- bdev/blockdev.sh@298 -- # uname -s 00:27:34.852 01:10:09 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:27:34.852 01:10:09 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:34.852 01:10:09 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:34.852 01:10:09 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:27:34.852 01:10:09 -- bdev/blockdev.sh@302 -- # local bdev_all 00:27:34.852 01:10:09 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:27:34.852 01:10:09 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:27:34.852 01:10:09 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:34.852 01:10:09 -- bdev/blockdev.sh@309 -- # local nbd_all 00:27:34.852 01:10:09 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:27:34.852 01:10:09 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:27:34.852 01:10:09 -- bdev/blockdev.sh@312 -- # local nbd_list 00:27:34.852 01:10:09 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:27:34.852 01:10:09 -- bdev/blockdev.sh@313 -- # local bdev_list 00:27:34.852 01:10:09 -- bdev/blockdev.sh@316 -- # nbd_pid=146413 00:27:34.852 01:10:09 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:34.852 01:10:09 -- bdev/blockdev.sh@318 -- # waitforlisten 146413 /var/tmp/spdk-nbd.sock 00:27:34.852 01:10:09 -- common/autotest_common.sh@829 -- # '[' -z 146413 ']' 00:27:34.852 01:10:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:34.852 01:10:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:34.852 01:10:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:34.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:34.852 01:10:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:34.852 01:10:09 -- common/autotest_common.sh@10 -- # set +x 00:27:34.852 01:10:09 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:34.852 [2024-11-18 01:10:09.088555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:34.852 [2024-11-18 01:10:09.089003] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.852 [2024-11-18 01:10:09.245061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.116 [2024-11-18 01:10:09.313028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.684 01:10:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:35.684 01:10:09 -- common/autotest_common.sh@862 -- # return 0 00:27:35.684 01:10:09 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:27:35.684 01:10:09 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:35.684 01:10:09 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:27:35.684 01:10:09 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:35.684 01:10:09 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:27:35.684 01:10:09 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:35.684 01:10:09 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:27:35.684 01:10:09 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:35.684 01:10:09 -- bdev/nbd_common.sh@24 -- # local i 00:27:35.684 01:10:09 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:35.684 01:10:09 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:35.684 01:10:09 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:27:35.684 01:10:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:27:35.943 01:10:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:35.943 01:10:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:35.943 01:10:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:35.943 01:10:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:35.943 01:10:10 -- common/autotest_common.sh@867 -- # local i 00:27:35.943 01:10:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:35.943 01:10:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:35.943 01:10:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:35.943 01:10:10 -- common/autotest_common.sh@871 -- # break 00:27:35.943 01:10:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:35.943 01:10:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:35.943 01:10:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:35.943 1+0 records in 00:27:35.943 1+0 records out 00:27:35.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407582 s, 10.0 MB/s 00:27:35.943 01:10:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:35.943 01:10:10 -- common/autotest_common.sh@884 -- # size=4096 00:27:35.943 01:10:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:35.943 01:10:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:35.943 01:10:10 -- common/autotest_common.sh@887 -- # return 0 00:27:35.943 01:10:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:35.943 01:10:10 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:27:35.943 01:10:10 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:36.202 01:10:10 
-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:36.202 { 00:27:36.202 "nbd_device": "/dev/nbd0", 00:27:36.202 "bdev_name": "Nvme0n1" 00:27:36.202 } 00:27:36.202 ]' 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:36.202 { 00:27:36.202 "nbd_device": "/dev/nbd0", 00:27:36.202 "bdev_name": "Nvme0n1" 00:27:36.202 } 00:27:36.202 ]' 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@51 -- # local i 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:36.202 01:10:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:36.462 01:10:10 -- bdev/nbd_common.sh@41 -- # break 00:27:36.462 01:10:10 -- bdev/nbd_common.sh@45 -- # return 0 00:27:36.462 01:10:10 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:36.462 01:10:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:36.462 01:10:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:36.462 01:10:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:36.462 01:10:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:36.462 01:10:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@65 -- # true 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@65 -- # count=0 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@122 -- # count=0 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@127 -- # return 0 00:27:36.721 01:10:10 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:36.721 01:10:10 -- 
bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@12 -- # local i 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:36.721 01:10:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:27:36.721 /dev/nbd0 00:27:36.721 01:10:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:36.721 01:10:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:36.721 01:10:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:36.721 01:10:11 -- common/autotest_common.sh@867 -- # local i 00:27:36.721 01:10:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:36.721 01:10:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:36.721 01:10:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:36.980 01:10:11 -- common/autotest_common.sh@871 -- # break 00:27:36.980 01:10:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:36.980 01:10:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:36.980 01:10:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.980 1+0 records in 00:27:36.980 1+0 records out 00:27:36.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334591 s, 12.2 MB/s 00:27:36.980 01:10:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.980 01:10:11 -- common/autotest_common.sh@884 -- # size=4096 00:27:36.980 01:10:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.980 01:10:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:36.980 01:10:11 -- common/autotest_common.sh@887 -- # return 0 00:27:36.980 01:10:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:36.980 01:10:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:36.980 01:10:11 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:36.980 01:10:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:36.980 01:10:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:37.239 { 00:27:37.239 "nbd_device": "/dev/nbd0", 00:27:37.239 "bdev_name": "Nvme0n1" 00:27:37.239 } 00:27:37.239 ]' 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:37.239 { 00:27:37.239 "nbd_device": "/dev/nbd0", 00:27:37.239 "bdev_name": "Nvme0n1" 00:27:37.239 } 00:27:37.239 ]' 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@65 -- # count=1 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@66 -- # echo 1 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@95 -- # count=1 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 
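[Editor's note] The nbd steps above export the bdev as a kernel block device and then verify it with ordinary dd/cmp: write 1 MiB of random data through /dev/nbd0 with O_DIRECT and compare it back. A minimal sketch of the same flow; it assumes an SPDK app is already listening on /var/tmp/spdk-nbd.sock (as bdev_svc is here), that the nbd kernel module is loaded (the [[ -e /sys/module/nbd ]] check above), and /tmp/nbdrandtest is just a scratch path standing in for the repo's nbdrandtest file:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC nbd_start_disk Nvme0n1 /dev/nbd0                       # export the bdev
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256    # 1 MiB of random data
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                     # verify what was written
$RPC nbd_stop_disk /dev/nbd0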
00:27:37.239 01:10:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:37.239 256+0 records in 00:27:37.239 256+0 records out 00:27:37.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00453977 s, 231 MB/s 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:37.239 256+0 records in 00:27:37.239 256+0 records out 00:27:37.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0562549 s, 18.6 MB/s 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:37.239 01:10:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:37.240 01:10:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:37.240 01:10:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:37.240 01:10:11 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:37.240 01:10:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:37.240 01:10:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:37.240 01:10:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:37.240 01:10:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:37.240 01:10:11 -- bdev/nbd_common.sh@51 -- # local i 00:27:37.240 01:10:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:37.240 01:10:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:37.499 01:10:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:37.499 01:10:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:37.499 01:10:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:37.499 01:10:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:37.499 01:10:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:37.499 01:10:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:37.499 01:10:11 -- bdev/nbd_common.sh@41 -- # break 00:27:37.499 01:10:11 -- bdev/nbd_common.sh@45 -- # return 0 00:27:37.499 01:10:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:37.499 01:10:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:37.499 01:10:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:37.758 01:10:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@65 -- # true 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@65 -- # count=0 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@104 -- # count=0 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@109 -- # return 0 00:27:37.758 01:10:12 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:27:37.758 01:10:12 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:38.017 malloc_lvol_verify 00:27:38.017 01:10:12 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:38.277 a1875888-8c1d-4165-96bf-fe3629979ff2 00:27:38.277 01:10:12 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:38.277 78148abf-aedc-4405-93a6-a2a7b9a0e9f6 00:27:38.536 01:10:12 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:38.536 /dev/nbd0 00:27:38.795 01:10:12 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:27:38.795 mke2fs 1.46.5 (30-Dec-2021) 00:27:38.796 00:27:38.796 Filesystem too small for a journal 00:27:38.796 Discarding device blocks: 0/1024 done 00:27:38.796 Creating filesystem with 1024 4k blocks and 1024 inodes 00:27:38.796 00:27:38.796 Allocating group tables: 0/1 done 00:27:38.796 Writing inode tables: 0/1 done 00:27:38.796 Writing superblocks and filesystem accounting information: 0/1 done 00:27:38.796 00:27:38.796 01:10:12 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:27:38.796 01:10:12 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:38.796 01:10:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:38.796 01:10:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:38.796 01:10:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:38.796 01:10:12 -- bdev/nbd_common.sh@51 -- # local i 00:27:38.796 01:10:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:38.796 01:10:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:39.055 01:10:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:39.055 01:10:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:39.055 01:10:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:39.055 01:10:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:39.055 01:10:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:39.055 01:10:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:39.055 01:10:13 -- bdev/nbd_common.sh@41 -- # break 00:27:39.055 
01:10:13 -- bdev/nbd_common.sh@45 -- # return 0 00:27:39.055 01:10:13 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:27:39.055 01:10:13 -- bdev/nbd_common.sh@147 -- # return 0 00:27:39.055 01:10:13 -- bdev/blockdev.sh@324 -- # killprocess 146413 00:27:39.055 01:10:13 -- common/autotest_common.sh@936 -- # '[' -z 146413 ']' 00:27:39.055 01:10:13 -- common/autotest_common.sh@940 -- # kill -0 146413 00:27:39.055 01:10:13 -- common/autotest_common.sh@941 -- # uname 00:27:39.055 01:10:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:39.055 01:10:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146413 00:27:39.055 01:10:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:39.055 01:10:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:39.055 01:10:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146413' 00:27:39.055 killing process with pid 146413 00:27:39.055 01:10:13 -- common/autotest_common.sh@955 -- # kill 146413 00:27:39.055 01:10:13 -- common/autotest_common.sh@960 -- # wait 146413 00:27:39.315 01:10:13 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:27:39.315 00:27:39.315 real 0m4.649s 00:27:39.315 user 0m6.633s 00:27:39.315 sys 0m1.343s 00:27:39.315 01:10:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:39.315 01:10:13 -- common/autotest_common.sh@10 -- # set +x 00:27:39.315 ************************************ 00:27:39.315 END TEST bdev_nbd 00:27:39.315 ************************************ 00:27:39.315 01:10:13 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:27:39.315 01:10:13 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:27:39.315 01:10:13 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:27:39.315 skipping fio tests on NVMe due to multi-ns failures. 00:27:39.315 01:10:13 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:39.315 01:10:13 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:39.315 01:10:13 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:27:39.315 01:10:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:39.315 01:10:13 -- common/autotest_common.sh@10 -- # set +x 00:27:39.574 ************************************ 00:27:39.574 START TEST bdev_verify 00:27:39.574 ************************************ 00:27:39.574 01:10:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:39.574 [2024-11-18 01:10:13.786755] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:39.574 [2024-11-18 01:10:13.787120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146587 ] 00:27:39.574 [2024-11-18 01:10:13.935368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:39.833 [2024-11-18 01:10:14.008256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.833 [2024-11-18 01:10:14.008256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.092 Running I/O for 5 seconds... 
00:27:45.367 00:27:45.367 Latency(us) 00:27:45.367 [2024-11-18T01:10:19.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.367 [2024-11-18T01:10:19.766Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:45.367 Verification LBA range: start 0x0 length 0xa0000 00:27:45.367 Nvme0n1 : 5.01 17897.03 69.91 0.00 0.00 7121.44 304.27 16103.13 00:27:45.368 [2024-11-18T01:10:19.767Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:45.368 Verification LBA range: start 0xa0000 length 0xa0000 00:27:45.368 Nvme0n1 : 5.01 17799.85 69.53 0.00 0.00 7159.82 308.18 22719.15 00:27:45.368 [2024-11-18T01:10:19.767Z] =================================================================================================================== 00:27:45.368 [2024-11-18T01:10:19.767Z] Total : 35696.88 139.44 0.00 0.00 7140.58 304.27 22719.15 00:27:55.357 ************************************ 00:27:55.357 END TEST bdev_verify 00:27:55.357 ************************************ 00:27:55.357 00:27:55.357 real 0m14.200s 00:27:55.357 user 0m27.400s 00:27:55.357 sys 0m0.400s 00:27:55.357 01:10:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:55.357 01:10:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.357 01:10:27 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:55.357 01:10:27 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:27:55.357 01:10:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:55.357 01:10:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.357 ************************************ 00:27:55.357 START TEST bdev_verify_big_io 00:27:55.357 ************************************ 00:27:55.357 01:10:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:55.357 [2024-11-18 01:10:28.054472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:55.357 [2024-11-18 01:10:28.054848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146774 ] 00:27:55.357 [2024-11-18 01:10:28.199702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:55.357 [2024-11-18 01:10:28.272600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.357 [2024-11-18 01:10:28.272597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.357 Running I/O for 5 seconds... 
00:27:59.554 00:27:59.554 Latency(us) 00:27:59.554 [2024-11-18T01:10:33.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.554 [2024-11-18T01:10:33.953Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:59.554 Verification LBA range: start 0x0 length 0xa000 00:27:59.554 Nvme0n1 : 5.03 1979.11 123.69 0.00 0.00 63824.38 784.09 116342.00 00:27:59.554 [2024-11-18T01:10:33.953Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:59.554 Verification LBA range: start 0xa000 length 0xa000 00:27:59.554 Nvme0n1 : 5.04 2063.49 128.97 0.00 0.00 61236.69 522.73 97867.09 00:27:59.554 [2024-11-18T01:10:33.953Z] =================================================================================================================== 00:27:59.554 [2024-11-18T01:10:33.953Z] Total : 4042.61 252.66 0.00 0.00 62503.33 522.73 116342.00 00:28:00.123 00:28:00.123 real 0m6.333s 00:28:00.123 user 0m11.780s 00:28:00.123 sys 0m0.302s 00:28:00.123 ************************************ 00:28:00.123 END TEST bdev_verify_big_io 00:28:00.123 ************************************ 00:28:00.123 01:10:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:00.123 01:10:34 -- common/autotest_common.sh@10 -- # set +x 00:28:00.123 01:10:34 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:00.123 01:10:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:00.123 01:10:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:00.123 01:10:34 -- common/autotest_common.sh@10 -- # set +x 00:28:00.123 ************************************ 00:28:00.123 START TEST bdev_write_zeroes 00:28:00.123 ************************************ 00:28:00.123 01:10:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:00.123 [2024-11-18 01:10:34.468251] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:00.123 [2024-11-18 01:10:34.469514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146877 ] 00:28:00.383 [2024-11-18 01:10:34.624776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.383 [2024-11-18 01:10:34.692185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.642 Running I/O for 1 seconds... 
00:28:01.602 00:28:01.602 Latency(us) 00:28:01.602 [2024-11-18T01:10:36.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.602 [2024-11-18T01:10:36.001Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:01.602 Nvme0n1 : 1.00 61873.18 241.69 0.00 0.00 2063.91 678.77 13419.28 00:28:01.602 [2024-11-18T01:10:36.001Z] =================================================================================================================== 00:28:01.602 [2024-11-18T01:10:36.001Z] Total : 61873.18 241.69 0.00 0.00 2063.91 678.77 13419.28 00:28:02.170 00:28:02.170 real 0m1.936s 00:28:02.170 user 0m1.577s 00:28:02.170 sys 0m0.258s 00:28:02.170 01:10:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:02.170 ************************************ 00:28:02.170 END TEST bdev_write_zeroes 00:28:02.170 ************************************ 00:28:02.170 01:10:36 -- common/autotest_common.sh@10 -- # set +x 00:28:02.170 01:10:36 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:02.170 01:10:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:02.170 01:10:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:02.170 01:10:36 -- common/autotest_common.sh@10 -- # set +x 00:28:02.170 ************************************ 00:28:02.170 START TEST bdev_json_nonenclosed 00:28:02.170 ************************************ 00:28:02.170 01:10:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:02.170 [2024-11-18 01:10:36.464288] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:02.170 [2024-11-18 01:10:36.464755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146928 ] 00:28:02.429 [2024-11-18 01:10:36.620099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.429 [2024-11-18 01:10:36.689734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.429 [2024-11-18 01:10:36.690318] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:28:02.429 [2024-11-18 01:10:36.690531] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:02.688 00:28:02.688 real 0m0.489s 00:28:02.688 user 0m0.243s 00:28:02.688 sys 0m0.145s 00:28:02.688 01:10:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:02.688 01:10:36 -- common/autotest_common.sh@10 -- # set +x 00:28:02.688 ************************************ 00:28:02.688 END TEST bdev_json_nonenclosed 00:28:02.688 ************************************ 00:28:02.688 01:10:36 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:02.688 01:10:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:02.688 01:10:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:02.688 01:10:36 -- common/autotest_common.sh@10 -- # set +x 00:28:02.688 ************************************ 00:28:02.688 START TEST bdev_json_nonarray 00:28:02.688 ************************************ 00:28:02.689 01:10:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:02.689 [2024-11-18 01:10:37.015964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:02.689 [2024-11-18 01:10:37.016332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146951 ] 00:28:02.947 [2024-11-18 01:10:37.157595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.947 [2024-11-18 01:10:37.223958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.947 [2024-11-18 01:10:37.224463] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:28:02.947 [2024-11-18 01:10:37.224620] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:03.206 00:28:03.206 real 0m0.445s 00:28:03.206 user 0m0.225s 00:28:03.206 sys 0m0.120s 00:28:03.206 01:10:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:03.206 ************************************ 00:28:03.206 END TEST bdev_json_nonarray 00:28:03.206 ************************************ 00:28:03.206 01:10:37 -- common/autotest_common.sh@10 -- # set +x 00:28:03.206 01:10:37 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:28:03.206 01:10:37 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:28:03.206 01:10:37 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:28:03.206 01:10:37 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:28:03.206 01:10:37 -- bdev/blockdev.sh@809 -- # cleanup 00:28:03.206 01:10:37 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:28:03.206 01:10:37 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:03.206 01:10:37 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:28:03.206 01:10:37 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:28:03.206 01:10:37 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:28:03.206 01:10:37 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:28:03.206 ************************************ 00:28:03.206 END TEST blockdev_nvme 00:28:03.206 ************************************ 00:28:03.206 00:28:03.206 real 0m33.260s 00:28:03.206 user 0m54.556s 00:28:03.206 sys 0m4.181s 00:28:03.206 01:10:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:03.206 01:10:37 -- common/autotest_common.sh@10 -- # set +x 00:28:03.206 01:10:37 -- spdk/autotest.sh@206 -- # uname -s 00:28:03.206 01:10:37 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:28:03.206 01:10:37 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:28:03.206 01:10:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:03.206 01:10:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:03.206 01:10:37 -- common/autotest_common.sh@10 -- # set +x 00:28:03.206 ************************************ 00:28:03.206 START TEST blockdev_nvme_gpt 00:28:03.206 ************************************ 00:28:03.206 01:10:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:28:03.466 * Looking for test storage... 
00:28:03.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:03.466 01:10:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:03.466 01:10:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:03.466 01:10:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:03.466 01:10:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:03.466 01:10:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:03.466 01:10:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:03.466 01:10:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:03.466 01:10:37 -- scripts/common.sh@335 -- # IFS=.-: 00:28:03.466 01:10:37 -- scripts/common.sh@335 -- # read -ra ver1 00:28:03.466 01:10:37 -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.466 01:10:37 -- scripts/common.sh@336 -- # read -ra ver2 00:28:03.466 01:10:37 -- scripts/common.sh@337 -- # local 'op=<' 00:28:03.466 01:10:37 -- scripts/common.sh@339 -- # ver1_l=2 00:28:03.466 01:10:37 -- scripts/common.sh@340 -- # ver2_l=1 00:28:03.466 01:10:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:03.466 01:10:37 -- scripts/common.sh@343 -- # case "$op" in 00:28:03.466 01:10:37 -- scripts/common.sh@344 -- # : 1 00:28:03.466 01:10:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:03.466 01:10:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:03.466 01:10:37 -- scripts/common.sh@364 -- # decimal 1 00:28:03.466 01:10:37 -- scripts/common.sh@352 -- # local d=1 00:28:03.466 01:10:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.466 01:10:37 -- scripts/common.sh@354 -- # echo 1 00:28:03.466 01:10:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:03.466 01:10:37 -- scripts/common.sh@365 -- # decimal 2 00:28:03.466 01:10:37 -- scripts/common.sh@352 -- # local d=2 00:28:03.466 01:10:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.466 01:10:37 -- scripts/common.sh@354 -- # echo 2 00:28:03.466 01:10:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:03.466 01:10:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:03.466 01:10:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:03.466 01:10:37 -- scripts/common.sh@367 -- # return 0 00:28:03.466 01:10:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.466 01:10:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:03.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.466 --rc genhtml_branch_coverage=1 00:28:03.466 --rc genhtml_function_coverage=1 00:28:03.466 --rc genhtml_legend=1 00:28:03.466 --rc geninfo_all_blocks=1 00:28:03.466 --rc geninfo_unexecuted_blocks=1 00:28:03.466 00:28:03.466 ' 00:28:03.466 01:10:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:03.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.466 --rc genhtml_branch_coverage=1 00:28:03.466 --rc genhtml_function_coverage=1 00:28:03.466 --rc genhtml_legend=1 00:28:03.466 --rc geninfo_all_blocks=1 00:28:03.466 --rc geninfo_unexecuted_blocks=1 00:28:03.466 00:28:03.466 ' 00:28:03.466 01:10:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:03.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.466 --rc genhtml_branch_coverage=1 00:28:03.466 --rc genhtml_function_coverage=1 00:28:03.466 --rc genhtml_legend=1 00:28:03.466 --rc geninfo_all_blocks=1 00:28:03.466 --rc geninfo_unexecuted_blocks=1 00:28:03.466 00:28:03.466 ' 00:28:03.466 01:10:37 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:03.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.466 --rc genhtml_branch_coverage=1 00:28:03.466 --rc genhtml_function_coverage=1 00:28:03.466 --rc genhtml_legend=1 00:28:03.466 --rc geninfo_all_blocks=1 00:28:03.466 --rc geninfo_unexecuted_blocks=1 00:28:03.466 00:28:03.466 ' 00:28:03.466 01:10:37 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:03.466 01:10:37 -- bdev/nbd_common.sh@6 -- # set -e 00:28:03.466 01:10:37 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:03.466 01:10:37 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:03.466 01:10:37 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:03.466 01:10:37 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:03.466 01:10:37 -- bdev/blockdev.sh@18 -- # : 00:28:03.466 01:10:37 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:28:03.466 01:10:37 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:28:03.466 01:10:37 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:28:03.466 01:10:37 -- bdev/blockdev.sh@672 -- # uname -s 00:28:03.466 01:10:37 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:28:03.466 01:10:37 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:28:03.466 01:10:37 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:28:03.466 01:10:37 -- bdev/blockdev.sh@681 -- # crypto_device= 00:28:03.466 01:10:37 -- bdev/blockdev.sh@682 -- # dek= 00:28:03.466 01:10:37 -- bdev/blockdev.sh@683 -- # env_ctx= 00:28:03.466 01:10:37 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:28:03.466 01:10:37 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:28:03.466 01:10:37 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:28:03.466 01:10:37 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:28:03.466 01:10:37 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:28:03.466 01:10:37 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=147042 00:28:03.466 01:10:37 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:03.466 01:10:37 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:03.466 01:10:37 -- bdev/blockdev.sh@47 -- # waitforlisten 147042 00:28:03.466 01:10:37 -- common/autotest_common.sh@829 -- # '[' -z 147042 ']' 00:28:03.466 01:10:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.466 01:10:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:03.466 01:10:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.466 01:10:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:03.466 01:10:37 -- common/autotest_common.sh@10 -- # set +x 00:28:03.466 [2024-11-18 01:10:37.837781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:28:03.466 [2024-11-18 01:10:37.838664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147042 ] 00:28:03.725 [2024-11-18 01:10:37.994533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.725 [2024-11-18 01:10:38.064466] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:03.725 [2024-11-18 01:10:38.064709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.663 01:10:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:04.663 01:10:38 -- common/autotest_common.sh@862 -- # return 0 00:28:04.663 01:10:38 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:28:04.663 01:10:38 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:28:04.663 01:10:38 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:04.923 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:04.923 Waiting for block devices as requested 00:28:04.923 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:05.182 01:10:39 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:28:05.182 01:10:39 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:28:05.182 01:10:39 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:28:05.182 01:10:39 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:28:05.182 01:10:39 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:28:05.182 01:10:39 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:28:05.182 01:10:39 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:28:05.182 01:10:39 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:05.182 01:10:39 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:28:05.182 01:10:39 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1') 00:28:05.182 01:10:39 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:28:05.182 01:10:39 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:28:05.182 01:10:39 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:28:05.182 01:10:39 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:28:05.182 01:10:39 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:28:05.182 01:10:39 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:28:05.182 01:10:39 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:28:05.182 BYT; 00:28:05.182 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:28:05.182 01:10:39 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:28:05.182 BYT; 00:28:05.182 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:28:05.182 01:10:39 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:28:05.182 01:10:39 -- bdev/blockdev.sh@114 -- # break 00:28:05.182 01:10:39 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:28:05.182 01:10:39 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:28:05.182 01:10:39 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:28:05.182 01:10:39 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart 
SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:28:05.441 01:10:39 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:28:05.441 01:10:39 -- scripts/common.sh@410 -- # local spdk_guid 00:28:05.441 01:10:39 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:28:05.441 01:10:39 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:28:05.441 01:10:39 -- scripts/common.sh@415 -- # IFS='()' 00:28:05.441 01:10:39 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:28:05.441 01:10:39 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:28:05.441 01:10:39 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:28:05.441 01:10:39 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:28:05.441 01:10:39 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:28:05.441 01:10:39 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:28:05.441 01:10:39 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:28:05.441 01:10:39 -- scripts/common.sh@422 -- # local spdk_guid 00:28:05.441 01:10:39 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:28:05.441 01:10:39 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:28:05.441 01:10:39 -- scripts/common.sh@427 -- # IFS='()' 00:28:05.441 01:10:39 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:28:05.441 01:10:39 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:28:05.441 01:10:39 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:28:05.441 01:10:39 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:28:05.441 01:10:39 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:28:05.441 01:10:39 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:28:05.441 01:10:39 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:28:06.379 The operation has completed successfully. 00:28:06.379 01:10:40 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:28:07.757 The operation has completed successfully. 
00:28:07.757 01:10:41 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:08.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:08.016 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:28:09.925 01:10:43 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:28:09.925 01:10:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.925 01:10:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.925 [] 00:28:09.925 01:10:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.925 01:10:43 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:28:09.925 01:10:43 -- bdev/blockdev.sh@79 -- # local json 00:28:09.925 01:10:43 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:28:09.925 01:10:43 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:09.925 01:10:43 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:28:09.925 01:10:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.925 01:10:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.925 01:10:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.925 01:10:43 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:28:09.925 01:10:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.925 01:10:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.925 01:10:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.925 01:10:43 -- bdev/blockdev.sh@738 -- # cat 00:28:09.925 01:10:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:28:09.925 01:10:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.925 01:10:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.925 01:10:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.925 01:10:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:28:09.925 01:10:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.925 01:10:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.925 01:10:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.926 01:10:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:09.926 01:10:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.926 01:10:44 -- common/autotest_common.sh@10 -- # set +x 00:28:09.926 01:10:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.926 01:10:44 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:28:09.926 01:10:44 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:28:09.926 01:10:44 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:28:09.926 01:10:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.926 01:10:44 -- common/autotest_common.sh@10 -- # set +x 00:28:09.926 01:10:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.926 01:10:44 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:28:09.926 01:10:44 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:28:09.926 01:10:44 -- bdev/blockdev.sh@747 -- # jq -r .name 00:28:09.926 01:10:44 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:28:09.926 01:10:44 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:28:09.926 01:10:44 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:28:09.926 01:10:44 -- bdev/blockdev.sh@752 -- # killprocess 147042 00:28:09.926 01:10:44 -- common/autotest_common.sh@936 -- # '[' -z 147042 ']' 00:28:09.926 01:10:44 -- common/autotest_common.sh@940 -- # kill -0 147042 00:28:09.926 01:10:44 -- common/autotest_common.sh@941 -- # uname 00:28:09.926 01:10:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:09.926 01:10:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 147042 00:28:09.926 01:10:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:09.926 01:10:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:09.926 01:10:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 147042' 00:28:09.926 killing process with pid 147042 00:28:09.926 01:10:44 -- common/autotest_common.sh@955 -- # kill 147042 00:28:09.926 01:10:44 -- common/autotest_common.sh@960 -- # wait 147042 00:28:10.497 01:10:44 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:10.497 01:10:44 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:28:10.497 01:10:44 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:28:10.497 01:10:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:10.497 01:10:44 -- common/autotest_common.sh@10 -- # set +x 00:28:10.497 ************************************ 00:28:10.497 START TEST bdev_hello_world 00:28:10.497 ************************************ 00:28:10.497 01:10:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 
'' 00:28:10.757 [2024-11-18 01:10:44.905088] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:10.757 [2024-11-18 01:10:44.905294] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147466 ] 00:28:10.757 [2024-11-18 01:10:45.045992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.757 [2024-11-18 01:10:45.115962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.016 [2024-11-18 01:10:45.358981] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:11.017 [2024-11-18 01:10:45.359067] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:28:11.017 [2024-11-18 01:10:45.359137] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:11.017 [2024-11-18 01:10:45.361686] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:11.017 [2024-11-18 01:10:45.362471] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:11.017 [2024-11-18 01:10:45.362520] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:11.017 [2024-11-18 01:10:45.362751] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:28:11.017 00:28:11.017 [2024-11-18 01:10:45.362800] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:11.587 00:28:11.587 real 0m0.907s 00:28:11.587 user 0m0.548s 00:28:11.587 sys 0m0.260s 00:28:11.587 01:10:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:11.587 01:10:45 -- common/autotest_common.sh@10 -- # set +x 00:28:11.587 ************************************ 00:28:11.587 END TEST bdev_hello_world 00:28:11.587 ************************************ 00:28:11.587 01:10:45 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:28:11.587 01:10:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:11.587 01:10:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:11.587 01:10:45 -- common/autotest_common.sh@10 -- # set +x 00:28:11.587 ************************************ 00:28:11.587 START TEST bdev_bounds 00:28:11.587 ************************************ 00:28:11.587 01:10:45 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:28:11.587 01:10:45 -- bdev/blockdev.sh@288 -- # bdevio_pid=147504 00:28:11.587 01:10:45 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:11.587 Process bdevio pid: 147504 00:28:11.587 01:10:45 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 147504' 00:28:11.587 01:10:45 -- bdev/blockdev.sh@291 -- # waitforlisten 147504 00:28:11.587 01:10:45 -- common/autotest_common.sh@829 -- # '[' -z 147504 ']' 00:28:11.587 01:10:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.587 01:10:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:11.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.587 01:10:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:11.587 01:10:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:11.587 01:10:45 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:11.587 01:10:45 -- common/autotest_common.sh@10 -- # set +x 00:28:11.587 [2024-11-18 01:10:45.899474] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:11.587 [2024-11-18 01:10:45.899828] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147504 ] 00:28:11.847 [2024-11-18 01:10:46.063057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:11.847 [2024-11-18 01:10:46.132503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.847 [2024-11-18 01:10:46.132687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.847 [2024-11-18 01:10:46.132692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.417 01:10:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:12.417 01:10:46 -- common/autotest_common.sh@862 -- # return 0 00:28:12.417 01:10:46 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:12.678 I/O targets: 00:28:12.678 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:28:12.678 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:28:12.678 00:28:12.678 00:28:12.678 CUnit - A unit testing framework for C - Version 2.1-3 00:28:12.678 http://cunit.sourceforge.net/ 00:28:12.678 00:28:12.678 00:28:12.678 Suite: bdevio tests on: Nvme0n1p2 00:28:12.678 Test: blockdev write read block ...passed 00:28:12.678 Test: blockdev write zeroes read block ...passed 00:28:12.678 Test: blockdev write zeroes read no split ...passed 00:28:12.678 Test: blockdev write zeroes read split ...passed 00:28:12.678 Test: blockdev write zeroes read split partial ...passed 00:28:12.678 Test: blockdev reset ...[2024-11-18 01:10:46.858236] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:12.678 [2024-11-18 01:10:46.860318] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:12.678 passed 00:28:12.678 Test: blockdev write read 8 blocks ...passed 00:28:12.678 Test: blockdev write read size > 128k ...passed 00:28:12.678 Test: blockdev write read invalid size ...passed 00:28:12.678 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:12.678 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:12.678 Test: blockdev write read max offset ...passed 00:28:12.678 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:12.678 Test: blockdev writev readv 8 blocks ...passed 00:28:12.678 Test: blockdev writev readv 30 x 1block ...passed 00:28:12.678 Test: blockdev writev readv block ...passed 00:28:12.678 Test: blockdev writev readv size > 128k ...passed 00:28:12.678 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:12.678 Test: blockdev comparev and writev ...[2024-11-18 01:10:46.866629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x8c00b000 len:0x1000 00:28:12.678 passed 00:28:12.678 Test: blockdev nvme passthru rw ...passed 00:28:12.678 Test: blockdev nvme passthru vendor specific ...passed 00:28:12.678 Test: blockdev nvme admin passthru ...passed 00:28:12.678 Test: blockdev copy ...[2024-11-18 01:10:46.866741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:12.678 passed 00:28:12.678 Suite: bdevio tests on: Nvme0n1p1 00:28:12.678 Test: blockdev write read block ...passed 00:28:12.678 Test: blockdev write zeroes read block ...passed 00:28:12.678 Test: blockdev write zeroes read no split ...passed 00:28:12.678 Test: blockdev write zeroes read split ...passed 00:28:12.678 Test: blockdev write zeroes read split partial ...passed 00:28:12.678 Test: blockdev reset ...[2024-11-18 01:10:46.880634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:12.678 [2024-11-18 01:10:46.882514] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:12.678 passed 00:28:12.678 Test: blockdev write read 8 blocks ...passed 00:28:12.678 Test: blockdev write read size > 128k ...passed 00:28:12.678 Test: blockdev write read invalid size ...passed 00:28:12.678 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:12.678 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:12.678 Test: blockdev write read max offset ...passed 00:28:12.678 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:12.678 Test: blockdev writev readv 8 blocks ...passed 00:28:12.678 Test: blockdev writev readv 30 x 1block ...passed 00:28:12.678 Test: blockdev writev readv block ...passed 00:28:12.678 Test: blockdev writev readv size > 128k ...passed 00:28:12.678 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:12.678 Test: blockdev comparev and writev ...[2024-11-18 01:10:46.888353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x8c00d000 len:0x1000 00:28:12.678 [2024-11-18 01:10:46.888420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:12.678 passed 00:28:12.678 Test: blockdev nvme passthru rw ...passed 00:28:12.679 Test: blockdev nvme passthru vendor specific ...passed 00:28:12.679 Test: blockdev nvme admin passthru ...passed 00:28:12.679 Test: blockdev copy ...passed 00:28:12.679 00:28:12.679 Run Summary: Type Total Ran Passed Failed Inactive 00:28:12.679 suites 2 2 n/a 0 0 00:28:12.679 tests 46 46 46 0 0 00:28:12.679 asserts 284 284 284 0 n/a 00:28:12.679 00:28:12.679 Elapsed time = 0.109 seconds 00:28:12.679 0 00:28:12.679 01:10:46 -- bdev/blockdev.sh@293 -- # killprocess 147504 00:28:12.679 01:10:46 -- common/autotest_common.sh@936 -- # '[' -z 147504 ']' 00:28:12.679 01:10:46 -- common/autotest_common.sh@940 -- # kill -0 147504 00:28:12.679 01:10:46 -- common/autotest_common.sh@941 -- # uname 00:28:12.679 01:10:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:12.679 01:10:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 147504 00:28:12.679 01:10:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:12.679 killing process with pid 147504 00:28:12.679 01:10:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:12.679 01:10:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 147504' 00:28:12.679 01:10:46 -- common/autotest_common.sh@955 -- # kill 147504 00:28:12.679 01:10:46 -- common/autotest_common.sh@960 -- # wait 147504 00:28:12.938 01:10:47 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:28:12.938 00:28:12.938 real 0m1.484s 00:28:12.938 user 0m3.449s 00:28:12.938 sys 0m0.367s 00:28:12.938 ************************************ 00:28:12.938 END TEST bdev_bounds 00:28:12.938 ************************************ 00:28:12.938 01:10:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:12.938 01:10:47 -- common/autotest_common.sh@10 -- # set +x 00:28:13.199 01:10:47 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:28:13.199 01:10:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:28:13.199 01:10:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:13.199 01:10:47 -- common/autotest_common.sh@10 -- # set +x 00:28:13.199 ************************************ 00:28:13.199 START TEST bdev_nbd 
00:28:13.199 ************************************ 00:28:13.199 01:10:47 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:28:13.199 01:10:47 -- bdev/blockdev.sh@298 -- # uname -s 00:28:13.199 01:10:47 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:28:13.199 01:10:47 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:13.199 01:10:47 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:13.199 01:10:47 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:28:13.199 01:10:47 -- bdev/blockdev.sh@302 -- # local bdev_all 00:28:13.199 01:10:47 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:28:13.199 01:10:47 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:28:13.199 01:10:47 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:28:13.199 01:10:47 -- bdev/blockdev.sh@309 -- # local nbd_all 00:28:13.199 01:10:47 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:28:13.199 01:10:47 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:13.199 01:10:47 -- bdev/blockdev.sh@312 -- # local nbd_list 00:28:13.199 01:10:47 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:13.199 01:10:47 -- bdev/blockdev.sh@313 -- # local bdev_list 00:28:13.199 01:10:47 -- bdev/blockdev.sh@316 -- # nbd_pid=147554 00:28:13.199 01:10:47 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:13.199 01:10:47 -- bdev/blockdev.sh@318 -- # waitforlisten 147554 /var/tmp/spdk-nbd.sock 00:28:13.199 01:10:47 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:13.199 01:10:47 -- common/autotest_common.sh@829 -- # '[' -z 147554 ']' 00:28:13.199 01:10:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:13.199 01:10:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:13.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:13.199 01:10:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:13.199 01:10:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:13.199 01:10:47 -- common/autotest_common.sh@10 -- # set +x 00:28:13.199 [2024-11-18 01:10:47.445861] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:28:13.199 [2024-11-18 01:10:47.446090] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.199 [2024-11-18 01:10:47.586407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.459 [2024-11-18 01:10:47.658688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.029 01:10:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:14.029 01:10:48 -- common/autotest_common.sh@862 -- # return 0 00:28:14.029 01:10:48 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:28:14.029 01:10:48 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:14.029 01:10:48 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:14.029 01:10:48 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:14.029 01:10:48 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:28:14.029 01:10:48 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:14.029 01:10:48 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:14.029 01:10:48 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:14.029 01:10:48 -- bdev/nbd_common.sh@24 -- # local i 00:28:14.029 01:10:48 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:14.029 01:10:48 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:14.029 01:10:48 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:28:14.029 01:10:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:28:14.289 01:10:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:14.289 01:10:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:14.289 01:10:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:14.289 01:10:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:14.289 01:10:48 -- common/autotest_common.sh@867 -- # local i 00:28:14.289 01:10:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:14.289 01:10:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:14.290 01:10:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:14.290 01:10:48 -- common/autotest_common.sh@871 -- # break 00:28:14.290 01:10:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:14.290 01:10:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:14.290 01:10:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:14.290 1+0 records in 00:28:14.290 1+0 records out 00:28:14.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407297 s, 10.1 MB/s 00:28:14.290 01:10:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:14.290 01:10:48 -- common/autotest_common.sh@884 -- # size=4096 00:28:14.290 01:10:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:14.290 01:10:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:14.290 01:10:48 -- common/autotest_common.sh@887 -- # return 0 00:28:14.290 01:10:48 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:14.290 01:10:48 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:28:14.290 01:10:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:28:14.550 01:10:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:28:14.550 01:10:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:28:14.550 01:10:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:28:14.550 01:10:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:14.550 01:10:48 -- common/autotest_common.sh@867 -- # local i 00:28:14.550 01:10:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:14.550 01:10:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:14.550 01:10:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:14.550 01:10:48 -- common/autotest_common.sh@871 -- # break 00:28:14.550 01:10:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:14.550 01:10:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:14.550 01:10:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:14.550 1+0 records in 00:28:14.550 1+0 records out 00:28:14.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796328 s, 5.1 MB/s 00:28:14.550 01:10:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:14.550 01:10:48 -- common/autotest_common.sh@884 -- # size=4096 00:28:14.550 01:10:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:14.550 01:10:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:14.550 01:10:48 -- common/autotest_common.sh@887 -- # return 0 00:28:14.550 01:10:48 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:14.550 01:10:48 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:28:14.550 01:10:48 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:14.809 01:10:49 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:14.809 { 00:28:14.809 "nbd_device": "/dev/nbd0", 00:28:14.809 "bdev_name": "Nvme0n1p1" 00:28:14.809 }, 00:28:14.809 { 00:28:14.809 "nbd_device": "/dev/nbd1", 00:28:14.809 "bdev_name": "Nvme0n1p2" 00:28:14.809 } 00:28:14.809 ]' 00:28:14.810 01:10:49 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:14.810 01:10:49 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:14.810 01:10:49 -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:14.810 { 00:28:14.810 "nbd_device": "/dev/nbd0", 00:28:14.810 "bdev_name": "Nvme0n1p1" 00:28:14.810 }, 00:28:14.810 { 00:28:14.810 "nbd_device": "/dev/nbd1", 00:28:14.810 "bdev_name": "Nvme0n1p2" 00:28:14.810 } 00:28:14.810 ]' 00:28:14.810 01:10:49 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:14.810 01:10:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:14.810 01:10:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:14.810 01:10:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:14.810 01:10:49 -- bdev/nbd_common.sh@51 -- # local i 00:28:14.810 01:10:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:14.810 01:10:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:15.068 01:10:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:15.068 01:10:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:15.068 01:10:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:15.068 01:10:49 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:15.068 01:10:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:15.068 01:10:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:15.068 01:10:49 -- bdev/nbd_common.sh@41 -- # break 00:28:15.068 01:10:49 -- bdev/nbd_common.sh@45 -- # return 0 00:28:15.068 01:10:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:15.068 01:10:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:15.327 01:10:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:15.327 01:10:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:15.327 01:10:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:15.327 01:10:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:15.327 01:10:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:15.327 01:10:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:15.327 01:10:49 -- bdev/nbd_common.sh@41 -- # break 00:28:15.327 01:10:49 -- bdev/nbd_common.sh@45 -- # return 0 00:28:15.327 01:10:49 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:15.327 01:10:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:15.327 01:10:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@65 -- # true 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@65 -- # count=0 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@122 -- # count=0 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@127 -- # return 0 00:28:15.587 01:10:49 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@12 -- # local i 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:15.587 01:10:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:28:15.846 /dev/nbd0 00:28:16.106 01:10:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:16.106 01:10:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:16.106 01:10:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:16.106 01:10:50 -- common/autotest_common.sh@867 -- # local i 00:28:16.106 01:10:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:16.106 01:10:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:16.106 01:10:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:16.106 01:10:50 -- common/autotest_common.sh@871 -- # break 00:28:16.106 01:10:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:16.106 01:10:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:16.107 01:10:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:16.107 1+0 records in 00:28:16.107 1+0 records out 00:28:16.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395542 s, 10.4 MB/s 00:28:16.107 01:10:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:16.107 01:10:50 -- common/autotest_common.sh@884 -- # size=4096 00:28:16.107 01:10:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:16.107 01:10:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:16.107 01:10:50 -- common/autotest_common.sh@887 -- # return 0 00:28:16.107 01:10:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:16.107 01:10:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:16.107 01:10:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:28:16.367 /dev/nbd1 00:28:16.367 01:10:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:16.367 01:10:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:16.367 01:10:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:16.367 01:10:50 -- common/autotest_common.sh@867 -- # local i 00:28:16.367 01:10:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:16.367 01:10:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:16.367 01:10:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:16.367 01:10:50 -- common/autotest_common.sh@871 -- # break 00:28:16.367 01:10:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:16.367 01:10:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:16.367 01:10:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:16.367 1+0 records in 00:28:16.367 1+0 records out 00:28:16.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490617 s, 8.3 MB/s 00:28:16.367 01:10:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:16.367 01:10:50 -- common/autotest_common.sh@884 -- # size=4096 00:28:16.367 01:10:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:16.367 01:10:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:16.367 01:10:50 -- common/autotest_common.sh@887 -- # return 0 00:28:16.367 01:10:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:16.367 01:10:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:16.367 01:10:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
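The hot-plug sequence being exercised here is driven entirely over the dedicated /var/tmp/spdk-nbd.sock RPC socket. Reduced to a by-hand session against the same spdk-nbd daemon (bdev and device names exactly as used in this test), the essential calls are roughly:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC nbd_start_disk Nvme0n1p1 /dev/nbd0   # export a bdev as a kernel block device
  $RPC nbd_start_disk Nvme0n1p2 /dev/nbd1
  $RPC nbd_get_disks                        # JSON list of the active exports
  $RPC nbd_stop_disk /dev/nbd0              # tear the exports back down
  $RPC nbd_stop_disk /dev/nbd1

The waitfornbd/waitfornbd_exit helpers in the transcript simply poll /proc/partitions (up to 20 times, as above) until the kernel has registered or released the corresponding device after each of these calls.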
00:28:16.367 01:10:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:16.367 01:10:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:16.640 { 00:28:16.640 "nbd_device": "/dev/nbd0", 00:28:16.640 "bdev_name": "Nvme0n1p1" 00:28:16.640 }, 00:28:16.640 { 00:28:16.640 "nbd_device": "/dev/nbd1", 00:28:16.640 "bdev_name": "Nvme0n1p2" 00:28:16.640 } 00:28:16.640 ]' 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:16.640 { 00:28:16.640 "nbd_device": "/dev/nbd0", 00:28:16.640 "bdev_name": "Nvme0n1p1" 00:28:16.640 }, 00:28:16.640 { 00:28:16.640 "nbd_device": "/dev/nbd1", 00:28:16.640 "bdev_name": "Nvme0n1p2" 00:28:16.640 } 00:28:16.640 ]' 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:16.640 /dev/nbd1' 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:16.640 /dev/nbd1' 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@65 -- # count=2 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@66 -- # echo 2 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@95 -- # count=2 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:16.640 256+0 records in 00:28:16.640 256+0 records out 00:28:16.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118897 s, 88.2 MB/s 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:16.640 256+0 records in 00:28:16.640 256+0 records out 00:28:16.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0724664 s, 14.5 MB/s 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:16.640 01:10:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:16.913 256+0 records in 00:28:16.913 256+0 records out 00:28:16.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0781053 s, 13.4 MB/s 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
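The data-verify step above and immediately below is plain dd and cmp through the NBD exports: 1 MiB of random data is written to each device and then compared back. Reproduced by hand it looks roughly like the following (a /tmp scratch file stands in for the repository's nbdrandtest path; sizes match the log):

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256          # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of=$dev bs=4096 count=256 oflag=direct   # write it through the export
    cmp -b -n 1M /tmp/nbdrandtest $dev                              # read it back and compare
  done
  rm /tmp/nbdrandtest

A mismatch makes cmp exit non-zero, which is what would fail this step of the test; the cmp calls themselves appear just below.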
00:28:16.913 01:10:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@51 -- # local i 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:16.913 01:10:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:17.173 01:10:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:17.173 01:10:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:17.173 01:10:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:17.173 01:10:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:17.173 01:10:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:17.173 01:10:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:17.173 01:10:51 -- bdev/nbd_common.sh@41 -- # break 00:28:17.173 01:10:51 -- bdev/nbd_common.sh@45 -- # return 0 00:28:17.173 01:10:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:17.173 01:10:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:17.433 01:10:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:17.433 01:10:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:17.433 01:10:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:17.433 01:10:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:17.433 01:10:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:17.433 01:10:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:17.433 01:10:51 -- bdev/nbd_common.sh@41 -- # break 00:28:17.433 01:10:51 -- bdev/nbd_common.sh@45 -- # return 0 00:28:17.433 01:10:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:17.433 01:10:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:17.433 01:10:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@65 -- # true 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@65 -- # count=0 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@104 -- # count=0 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:17.693 01:10:51 -- 
bdev/nbd_common.sh@109 -- # return 0 00:28:17.693 01:10:51 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:28:17.693 01:10:51 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:17.693 malloc_lvol_verify 00:28:17.693 01:10:52 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:17.954 afb032c3-0166-4022-935e-b8220f93934d 00:28:17.955 01:10:52 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:18.214 f0e1e01b-1e6d-4e52-a203-70fa8d0054f8 00:28:18.214 01:10:52 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:18.474 /dev/nbd0 00:28:18.474 01:10:52 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:28:18.474 mke2fs 1.46.5 (30-Dec-2021) 00:28:18.474 00:28:18.474 Filesystem too small for a journal 00:28:18.474 Discarding device blocks: 0/1024 done 00:28:18.474 Creating filesystem with 1024 4k blocks and 1024 inodes 00:28:18.474 00:28:18.474 Allocating group tables: 0/1 done 00:28:18.474 Writing inode tables: 0/1 done 00:28:18.474 Writing superblocks and filesystem accounting information: 0/1 done 00:28:18.474 00:28:18.474 01:10:52 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:28:18.474 01:10:52 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:18.474 01:10:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:18.474 01:10:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:18.474 01:10:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:18.474 01:10:52 -- bdev/nbd_common.sh@51 -- # local i 00:28:18.474 01:10:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:18.474 01:10:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:18.734 01:10:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:18.734 01:10:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:18.734 01:10:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:18.734 01:10:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:18.734 01:10:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:18.734 01:10:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:18.734 01:10:52 -- bdev/nbd_common.sh@41 -- # break 00:28:18.734 01:10:52 -- bdev/nbd_common.sh@45 -- # return 0 00:28:18.734 01:10:52 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:28:18.734 01:10:52 -- bdev/nbd_common.sh@147 -- # return 0 00:28:18.735 01:10:52 -- bdev/blockdev.sh@324 -- # killprocess 147554 00:28:18.735 01:10:52 -- common/autotest_common.sh@936 -- # '[' -z 147554 ']' 00:28:18.735 01:10:52 -- common/autotest_common.sh@940 -- # kill -0 147554 00:28:18.735 01:10:52 -- common/autotest_common.sh@941 -- # uname 00:28:18.735 01:10:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:18.735 01:10:52 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 147554 00:28:18.735 01:10:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:18.735 killing process with pid 147554 00:28:18.735 01:10:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:18.735 01:10:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 147554' 00:28:18.735 01:10:53 -- common/autotest_common.sh@955 -- # kill 147554 00:28:18.735 01:10:53 -- common/autotest_common.sh@960 -- # wait 147554 00:28:19.306 01:10:53 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:28:19.306 00:28:19.306 real 0m6.058s 00:28:19.306 user 0m8.533s 00:28:19.306 sys 0m2.072s 00:28:19.306 ************************************ 00:28:19.306 END TEST bdev_nbd 00:28:19.306 ************************************ 00:28:19.306 01:10:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:19.306 01:10:53 -- common/autotest_common.sh@10 -- # set +x 00:28:19.306 01:10:53 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:28:19.306 01:10:53 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:28:19.306 01:10:53 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:28:19.306 skipping fio tests on NVMe due to multi-ns failures. 00:28:19.306 01:10:53 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:28:19.306 01:10:53 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:19.306 01:10:53 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:19.306 01:10:53 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:28:19.306 01:10:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:19.306 01:10:53 -- common/autotest_common.sh@10 -- # set +x 00:28:19.306 ************************************ 00:28:19.306 START TEST bdev_verify 00:28:19.306 ************************************ 00:28:19.306 01:10:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:19.306 [2024-11-18 01:10:53.576228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:19.306 [2024-11-18 01:10:53.576519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147801 ] 00:28:19.567 [2024-11-18 01:10:53.735384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:19.567 [2024-11-18 01:10:53.831701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.567 [2024-11-18 01:10:53.831705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.827 Running I/O for 5 seconds... 
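The lvol round trip that closed out the NBD test above reduces to four RPCs plus a mkfs. As a by-hand sketch against the same spdk-nbd socket (the malloc_lvol_verify/lvs/lvol names are the test's own), it is roughly:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512-byte blocks
  $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # logical volume store on top of it
  $RPC bdev_lvol_create lvol 4 -l lvs                    # small lvol (1024 4k blocks, per the mkfs output above)
  $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol over NBD
  mkfs.ext4 /dev/nbd0                                    # format it; too small for a journal, as mke2fs notes above
  $RPC nbd_stop_disk /dev/nbd0

The mkfs exit status is what feeds mkfs_ret above and decides whether nbd_with_lvol_verify returns 0. The verify results for the bdevperf run just launched follow below.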
00:28:25.103 00:28:25.103 Latency(us) 00:28:25.103 [2024-11-18T01:10:59.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.103 [2024-11-18T01:10:59.502Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:25.104 Verification LBA range: start 0x0 length 0x4ff80 00:28:25.104 Nvme0n1p1 : 5.01 7263.76 28.37 0.00 0.00 17577.24 1474.56 29085.50 00:28:25.104 [2024-11-18T01:10:59.503Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:25.104 Verification LBA range: start 0x4ff80 length 0x4ff80 00:28:25.104 Nvme0n1p1 : 5.01 5045.30 19.71 0.00 0.00 25308.47 1552.58 32955.25 00:28:25.104 [2024-11-18T01:10:59.503Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:25.104 Verification LBA range: start 0x0 length 0x4ff7f 00:28:25.104 Nvme0n1p2 : 5.02 7260.46 28.36 0.00 0.00 17562.29 2621.44 27213.04 00:28:25.104 [2024-11-18T01:10:59.503Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:25.104 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:28:25.104 Nvme0n1p2 : 5.02 5042.89 19.70 0.00 0.00 25287.88 2808.69 32955.25 00:28:25.104 [2024-11-18T01:10:59.503Z] =================================================================================================================== 00:28:25.104 [2024-11-18T01:10:59.503Z] Total : 24612.42 96.14 0.00 0.00 20737.39 1474.56 32955.25 00:28:27.639 00:28:27.639 real 0m8.444s 00:28:27.639 user 0m15.872s 00:28:27.639 sys 0m0.367s 00:28:27.640 01:11:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:27.640 01:11:01 -- common/autotest_common.sh@10 -- # set +x 00:28:27.640 ************************************ 00:28:27.640 END TEST bdev_verify 00:28:27.640 ************************************ 00:28:27.640 01:11:02 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:27.640 01:11:02 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:28:27.640 01:11:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:27.640 01:11:02 -- common/autotest_common.sh@10 -- # set +x 00:28:27.640 ************************************ 00:28:27.640 START TEST bdev_verify_big_io 00:28:27.640 ************************************ 00:28:27.640 01:11:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:27.899 [2024-11-18 01:11:02.098611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:27.899 [2024-11-18 01:11:02.098904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147915 ] 00:28:27.899 [2024-11-18 01:11:02.256639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:28.158 [2024-11-18 01:11:02.333164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.158 [2024-11-18 01:11:02.333164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.417 Running I/O for 5 seconds... 
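Both the 4 KiB verify pass summarized above and the 64 KiB big-I/O pass just launched use the same bdevperf front end; only the I/O size flag differs. Stripped of the test harness, the two invocations are roughly (paths, queue depth, runtime and core mask copied from this job's log rather than being requirements):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  $BDEVPERF --json $CONF -q 128 -o 4096  -w verify -t 5 -C -m 0x3   # bdev_verify
  $BDEVPERF --json $CONF -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # bdev_verify_big_io

The per-job rows in the result tables break the numbers out by bdev (Nvme0n1p1/Nvme0n1p2) and by core of that 0x3 mask; the big-I/O results appear below.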
00:28:33.767 00:28:33.767 Latency(us) 00:28:33.767 [2024-11-18T01:11:08.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.767 [2024-11-18T01:11:08.166Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:33.767 Verification LBA range: start 0x0 length 0x4ff8 00:28:33.767 Nvme0n1p1 : 5.11 960.69 60.04 0.00 0.00 131816.38 3620.08 211712.49 00:28:33.767 [2024-11-18T01:11:08.166Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:33.767 Verification LBA range: start 0x4ff8 length 0x4ff8 00:28:33.767 Nvme0n1p1 : 5.11 893.20 55.82 0.00 0.00 141882.47 3510.86 230686.72 00:28:33.767 [2024-11-18T01:11:08.166Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:33.767 Verification LBA range: start 0x0 length 0x4ff7 00:28:33.767 Nvme0n1p2 : 5.11 967.79 60.49 0.00 0.00 129657.51 702.17 215707.06 00:28:33.767 [2024-11-18T01:11:08.166Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:33.767 Verification LBA range: start 0x4ff7 length 0x4ff7 00:28:33.767 Nvme0n1p2 : 5.11 892.75 55.80 0.00 0.00 140333.95 4056.99 238675.87 00:28:33.767 [2024-11-18T01:11:08.166Z] =================================================================================================================== 00:28:33.767 [2024-11-18T01:11:08.166Z] Total : 3714.43 232.15 0.00 0.00 135721.97 702.17 238675.87 00:28:34.026 00:28:34.026 real 0m6.381s 00:28:34.026 user 0m11.841s 00:28:34.026 sys 0m0.303s 00:28:34.026 01:11:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:34.026 01:11:08 -- common/autotest_common.sh@10 -- # set +x 00:28:34.026 ************************************ 00:28:34.026 END TEST bdev_verify_big_io 00:28:34.026 ************************************ 00:28:34.286 01:11:08 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:34.286 01:11:08 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:34.286 01:11:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:34.286 01:11:08 -- common/autotest_common.sh@10 -- # set +x 00:28:34.286 ************************************ 00:28:34.286 START TEST bdev_write_zeroes 00:28:34.286 ************************************ 00:28:34.286 01:11:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:34.286 [2024-11-18 01:11:08.529072] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:34.286 [2024-11-18 01:11:08.529284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148009 ] 00:28:34.286 [2024-11-18 01:11:08.673312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.545 [2024-11-18 01:11:08.747884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.804 Running I/O for 1 seconds... 
00:28:35.736 00:28:35.736 Latency(us) 00:28:35.736 [2024-11-18T01:11:10.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.736 [2024-11-18T01:11:10.135Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:35.736 Nvme0n1p1 : 1.01 27890.48 108.95 0.00 0.00 4580.34 2605.84 13232.03 00:28:35.736 [2024-11-18T01:11:10.135Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:35.736 Nvme0n1p2 : 1.01 27859.38 108.83 0.00 0.00 4579.69 2543.42 13606.52 00:28:35.736 [2024-11-18T01:11:10.135Z] =================================================================================================================== 00:28:35.736 [2024-11-18T01:11:10.135Z] Total : 55749.87 217.77 0.00 0.00 4580.01 2543.42 13606.52 00:28:36.305 00:28:36.305 real 0m1.957s 00:28:36.305 user 0m1.585s 00:28:36.305 sys 0m0.272s 00:28:36.305 01:11:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:36.305 01:11:10 -- common/autotest_common.sh@10 -- # set +x 00:28:36.305 ************************************ 00:28:36.305 END TEST bdev_write_zeroes 00:28:36.305 ************************************ 00:28:36.305 01:11:10 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:36.305 01:11:10 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:36.305 01:11:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:36.305 01:11:10 -- common/autotest_common.sh@10 -- # set +x 00:28:36.305 ************************************ 00:28:36.305 START TEST bdev_json_nonenclosed 00:28:36.305 ************************************ 00:28:36.305 01:11:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:36.305 [2024-11-18 01:11:10.575762] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:36.305 [2024-11-18 01:11:10.576017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148062 ] 00:28:36.565 [2024-11-18 01:11:10.732143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.565 [2024-11-18 01:11:10.806903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.565 [2024-11-18 01:11:10.807278] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:28:36.565 [2024-11-18 01:11:10.807418] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:36.825 00:28:36.825 real 0m0.506s 00:28:36.825 user 0m0.266s 00:28:36.825 sys 0m0.139s 00:28:36.825 01:11:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:36.825 01:11:11 -- common/autotest_common.sh@10 -- # set +x 00:28:36.825 ************************************ 00:28:36.825 END TEST bdev_json_nonenclosed 00:28:36.825 ************************************ 00:28:36.825 01:11:11 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:36.825 01:11:11 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:36.825 01:11:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:36.825 01:11:11 -- common/autotest_common.sh@10 -- # set +x 00:28:36.825 ************************************ 00:28:36.825 START TEST bdev_json_nonarray 00:28:36.825 ************************************ 00:28:36.825 01:11:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:36.825 [2024-11-18 01:11:11.133382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:36.825 [2024-11-18 01:11:11.133772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148094 ] 00:28:37.084 [2024-11-18 01:11:11.277356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.084 [2024-11-18 01:11:11.350651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.084 [2024-11-18 01:11:11.351159] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:28:37.084 [2024-11-18 01:11:11.351310] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:37.343 00:28:37.343 real 0m0.474s 00:28:37.343 user 0m0.226s 00:28:37.343 sys 0m0.148s 00:28:37.344 01:11:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:37.344 01:11:11 -- common/autotest_common.sh@10 -- # set +x 00:28:37.344 ************************************ 00:28:37.344 END TEST bdev_json_nonarray 00:28:37.344 ************************************ 00:28:37.344 01:11:11 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:28:37.344 01:11:11 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:28:37.344 01:11:11 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:28:37.344 01:11:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:37.344 01:11:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:37.344 01:11:11 -- common/autotest_common.sh@10 -- # set +x 00:28:37.344 ************************************ 00:28:37.344 START TEST bdev_gpt_uuid 00:28:37.344 ************************************ 00:28:37.344 01:11:11 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:28:37.344 01:11:11 -- bdev/blockdev.sh@612 -- # local bdev 00:28:37.344 01:11:11 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:28:37.344 01:11:11 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=148116 00:28:37.344 01:11:11 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:37.344 01:11:11 -- bdev/blockdev.sh@47 -- # waitforlisten 148116 00:28:37.344 01:11:11 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:37.344 01:11:11 -- common/autotest_common.sh@829 -- # '[' -z 148116 ']' 00:28:37.344 01:11:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.344 01:11:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:37.344 01:11:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.344 01:11:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:37.344 01:11:11 -- common/autotest_common.sh@10 -- # set +x 00:28:37.344 [2024-11-18 01:11:11.718418] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:37.344 [2024-11-18 01:11:11.718743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148116 ] 00:28:37.603 [2024-11-18 01:11:11.876492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.603 [2024-11-18 01:11:11.965559] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:37.603 [2024-11-18 01:11:11.965973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.543 01:11:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:38.543 01:11:12 -- common/autotest_common.sh@862 -- # return 0 00:28:38.543 01:11:12 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:38.543 01:11:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.543 01:11:12 -- common/autotest_common.sh@10 -- # set +x 00:28:38.543 Some configs were skipped because the RPC state that can call them passed over. 
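The UUID assertions that follow query each GPT partition bdev by its fixed unique partition GUID and compare the returned JSON fields against that same GUID. Done by hand against the spdk_tgt just started (default /var/tmp/spdk.sock socket; the GUID below is the SPDK_TEST_first value used by this test), the first check is roughly:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
      | jq -r '.[0].aliases[0], .[0].driver_specific.gpt.unique_partition_guid'

Both values should come back as that same GUID, which is exactly what the [[ ... == ... ]] comparisons below assert, first for SPDK_TEST_first and then for the SPDK_TEST_second partition.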
00:28:38.543 01:11:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.543 01:11:12 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:28:38.543 01:11:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.543 01:11:12 -- common/autotest_common.sh@10 -- # set +x 00:28:38.543 01:11:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.543 01:11:12 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:28:38.543 01:11:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.543 01:11:12 -- common/autotest_common.sh@10 -- # set +x 00:28:38.543 01:11:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.543 01:11:12 -- bdev/blockdev.sh@619 -- # bdev='[ 00:28:38.543 { 00:28:38.543 "name": "Nvme0n1p1", 00:28:38.543 "aliases": [ 00:28:38.543 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:28:38.543 ], 00:28:38.543 "product_name": "GPT Disk", 00:28:38.543 "block_size": 4096, 00:28:38.543 "num_blocks": 655104, 00:28:38.543 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:28:38.543 "assigned_rate_limits": { 00:28:38.543 "rw_ios_per_sec": 0, 00:28:38.543 "rw_mbytes_per_sec": 0, 00:28:38.543 "r_mbytes_per_sec": 0, 00:28:38.543 "w_mbytes_per_sec": 0 00:28:38.543 }, 00:28:38.543 "claimed": false, 00:28:38.543 "zoned": false, 00:28:38.543 "supported_io_types": { 00:28:38.543 "read": true, 00:28:38.543 "write": true, 00:28:38.543 "unmap": true, 00:28:38.543 "write_zeroes": true, 00:28:38.543 "flush": true, 00:28:38.543 "reset": true, 00:28:38.543 "compare": true, 00:28:38.543 "compare_and_write": false, 00:28:38.543 "abort": true, 00:28:38.543 "nvme_admin": false, 00:28:38.543 "nvme_io": false 00:28:38.543 }, 00:28:38.543 "driver_specific": { 00:28:38.543 "gpt": { 00:28:38.543 "base_bdev": "Nvme0n1", 00:28:38.543 "offset_blocks": 256, 00:28:38.543 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:28:38.543 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:28:38.543 "partition_name": "SPDK_TEST_first" 00:28:38.543 } 00:28:38.543 } 00:28:38.543 } 00:28:38.543 ]' 00:28:38.543 01:11:12 -- bdev/blockdev.sh@620 -- # jq -r length 00:28:38.543 01:11:12 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:28:38.543 01:11:12 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:28:38.543 01:11:12 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:28:38.543 01:11:12 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:28:38.543 01:11:12 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:28:38.543 01:11:12 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:28:38.543 01:11:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.543 01:11:12 -- common/autotest_common.sh@10 -- # set +x 00:28:38.803 01:11:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.803 01:11:12 -- bdev/blockdev.sh@624 -- # bdev='[ 00:28:38.803 { 00:28:38.803 "name": "Nvme0n1p2", 00:28:38.803 "aliases": [ 00:28:38.803 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:28:38.803 ], 00:28:38.803 "product_name": "GPT Disk", 00:28:38.804 "block_size": 4096, 00:28:38.804 "num_blocks": 655103, 00:28:38.804 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:28:38.804 "assigned_rate_limits": { 00:28:38.804 "rw_ios_per_sec": 0, 00:28:38.804 
"rw_mbytes_per_sec": 0, 00:28:38.804 "r_mbytes_per_sec": 0, 00:28:38.804 "w_mbytes_per_sec": 0 00:28:38.804 }, 00:28:38.804 "claimed": false, 00:28:38.804 "zoned": false, 00:28:38.804 "supported_io_types": { 00:28:38.804 "read": true, 00:28:38.804 "write": true, 00:28:38.804 "unmap": true, 00:28:38.804 "write_zeroes": true, 00:28:38.804 "flush": true, 00:28:38.804 "reset": true, 00:28:38.804 "compare": true, 00:28:38.804 "compare_and_write": false, 00:28:38.804 "abort": true, 00:28:38.804 "nvme_admin": false, 00:28:38.804 "nvme_io": false 00:28:38.804 }, 00:28:38.804 "driver_specific": { 00:28:38.804 "gpt": { 00:28:38.804 "base_bdev": "Nvme0n1", 00:28:38.804 "offset_blocks": 655360, 00:28:38.804 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:28:38.804 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:28:38.804 "partition_name": "SPDK_TEST_second" 00:28:38.804 } 00:28:38.804 } 00:28:38.804 } 00:28:38.804 ]' 00:28:38.804 01:11:12 -- bdev/blockdev.sh@625 -- # jq -r length 00:28:38.804 01:11:12 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:28:38.804 01:11:12 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:28:38.804 01:11:13 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:28:38.804 01:11:13 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:28:38.804 01:11:13 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:28:38.804 01:11:13 -- bdev/blockdev.sh@629 -- # killprocess 148116 00:28:38.804 01:11:13 -- common/autotest_common.sh@936 -- # '[' -z 148116 ']' 00:28:38.804 01:11:13 -- common/autotest_common.sh@940 -- # kill -0 148116 00:28:38.804 01:11:13 -- common/autotest_common.sh@941 -- # uname 00:28:38.804 01:11:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:38.804 01:11:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 148116 00:28:38.804 01:11:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:38.804 01:11:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:38.804 01:11:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 148116' 00:28:38.804 killing process with pid 148116 00:28:38.804 01:11:13 -- common/autotest_common.sh@955 -- # kill 148116 00:28:38.804 01:11:13 -- common/autotest_common.sh@960 -- # wait 148116 00:28:39.743 00:28:39.743 real 0m2.199s 00:28:39.743 user 0m2.299s 00:28:39.743 sys 0m0.634s 00:28:39.743 01:11:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:39.743 01:11:13 -- common/autotest_common.sh@10 -- # set +x 00:28:39.743 ************************************ 00:28:39.743 END TEST bdev_gpt_uuid 00:28:39.743 ************************************ 00:28:39.743 01:11:13 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:28:39.743 01:11:13 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:28:39.743 01:11:13 -- bdev/blockdev.sh@809 -- # cleanup 00:28:39.743 01:11:13 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:28:39.743 01:11:13 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:39.743 01:11:13 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:28:39.743 01:11:13 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:28:39.743 01:11:13 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:28:39.743 01:11:13 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:40.002 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:40.002 Waiting for block devices as requested 00:28:40.262 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:40.262 01:11:14 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:28:40.262 01:11:14 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:28:40.262 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:28:40.262 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:28:40.262 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:28:40.262 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:28:40.262 01:11:14 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:28:40.262 ************************************ 00:28:40.262 END TEST blockdev_nvme_gpt 00:28:40.262 ************************************ 00:28:40.262 00:28:40.262 real 0m37.004s 00:28:40.262 user 0m52.959s 00:28:40.262 sys 0m8.274s 00:28:40.262 01:11:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:40.262 01:11:14 -- common/autotest_common.sh@10 -- # set +x 00:28:40.262 01:11:14 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:28:40.262 01:11:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:40.262 01:11:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:40.262 01:11:14 -- common/autotest_common.sh@10 -- # set +x 00:28:40.262 ************************************ 00:28:40.262 START TEST nvme 00:28:40.262 ************************************ 00:28:40.262 01:11:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:28:40.523 * Looking for test storage... 00:28:40.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:40.523 01:11:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:40.523 01:11:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:40.523 01:11:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:40.523 01:11:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:40.523 01:11:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:40.523 01:11:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:40.523 01:11:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:40.523 01:11:14 -- scripts/common.sh@335 -- # IFS=.-: 00:28:40.523 01:11:14 -- scripts/common.sh@335 -- # read -ra ver1 00:28:40.523 01:11:14 -- scripts/common.sh@336 -- # IFS=.-: 00:28:40.523 01:11:14 -- scripts/common.sh@336 -- # read -ra ver2 00:28:40.523 01:11:14 -- scripts/common.sh@337 -- # local 'op=<' 00:28:40.523 01:11:14 -- scripts/common.sh@339 -- # ver1_l=2 00:28:40.523 01:11:14 -- scripts/common.sh@340 -- # ver2_l=1 00:28:40.523 01:11:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:40.523 01:11:14 -- scripts/common.sh@343 -- # case "$op" in 00:28:40.523 01:11:14 -- scripts/common.sh@344 -- # : 1 00:28:40.523 01:11:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:40.523 01:11:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:40.523 01:11:14 -- scripts/common.sh@364 -- # decimal 1 00:28:40.523 01:11:14 -- scripts/common.sh@352 -- # local d=1 00:28:40.523 01:11:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:40.523 01:11:14 -- scripts/common.sh@354 -- # echo 1 00:28:40.523 01:11:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:40.523 01:11:14 -- scripts/common.sh@365 -- # decimal 2 00:28:40.523 01:11:14 -- scripts/common.sh@352 -- # local d=2 00:28:40.523 01:11:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:40.523 01:11:14 -- scripts/common.sh@354 -- # echo 2 00:28:40.523 01:11:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:40.523 01:11:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:40.523 01:11:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:40.523 01:11:14 -- scripts/common.sh@367 -- # return 0 00:28:40.523 01:11:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:40.523 01:11:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:40.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.523 --rc genhtml_branch_coverage=1 00:28:40.523 --rc genhtml_function_coverage=1 00:28:40.523 --rc genhtml_legend=1 00:28:40.523 --rc geninfo_all_blocks=1 00:28:40.523 --rc geninfo_unexecuted_blocks=1 00:28:40.523 00:28:40.523 ' 00:28:40.523 01:11:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:40.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.523 --rc genhtml_branch_coverage=1 00:28:40.523 --rc genhtml_function_coverage=1 00:28:40.523 --rc genhtml_legend=1 00:28:40.523 --rc geninfo_all_blocks=1 00:28:40.523 --rc geninfo_unexecuted_blocks=1 00:28:40.523 00:28:40.523 ' 00:28:40.523 01:11:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:40.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.523 --rc genhtml_branch_coverage=1 00:28:40.523 --rc genhtml_function_coverage=1 00:28:40.523 --rc genhtml_legend=1 00:28:40.523 --rc geninfo_all_blocks=1 00:28:40.523 --rc geninfo_unexecuted_blocks=1 00:28:40.523 00:28:40.523 ' 00:28:40.523 01:11:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:40.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.523 --rc genhtml_branch_coverage=1 00:28:40.523 --rc genhtml_function_coverage=1 00:28:40.523 --rc genhtml_legend=1 00:28:40.523 --rc geninfo_all_blocks=1 00:28:40.523 --rc geninfo_unexecuted_blocks=1 00:28:40.523 00:28:40.523 ' 00:28:40.523 01:11:14 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:41.092 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:41.092 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:28:43.632 01:11:17 -- nvme/nvme.sh@79 -- # uname 00:28:43.632 01:11:17 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:28:43.632 01:11:17 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:28:43.632 01:11:17 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:28:43.632 01:11:17 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:28:43.632 01:11:17 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:28:43.632 01:11:17 -- common/autotest_common.sh@1055 -- # echo 0 00:28:43.632 01:11:17 -- common/autotest_common.sh@1057 -- # stubpid=148538 00:28:43.632 01:11:17 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 
00:28:43.632 Waiting for stub to ready for secondary processes... 00:28:43.632 01:11:17 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:28:43.632 01:11:17 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:43.632 01:11:17 -- common/autotest_common.sh@1061 -- # [[ -e /proc/148538 ]] 00:28:43.632 01:11:17 -- common/autotest_common.sh@1062 -- # sleep 1s 00:28:43.632 [2024-11-18 01:11:17.480597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:43.632 [2024-11-18 01:11:17.481535] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.201 01:11:18 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:44.201 01:11:18 -- common/autotest_common.sh@1061 -- # [[ -e /proc/148538 ]] 00:28:44.201 01:11:18 -- common/autotest_common.sh@1062 -- # sleep 1s 00:28:45.140 [2024-11-18 01:11:19.336107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:45.140 [2024-11-18 01:11:19.377441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.140 [2024-11-18 01:11:19.377626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.140 [2024-11-18 01:11:19.377627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:45.140 [2024-11-18 01:11:19.385590] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:28:45.140 [2024-11-18 01:11:19.396807] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:28:45.140 [2024-11-18 01:11:19.397874] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:28:45.140 01:11:19 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:45.140 done. 00:28:45.140 01:11:19 -- common/autotest_common.sh@1064 -- # echo done. 
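Everything in the nvme suite from here on runs against the stub that has just come up: a primary SPDK process that keeps the hugepage memory and the QEMU NVMe controller initialized so each short test binary can start without repeating the full EAL/device bring-up. Reduced to a sketch (flag values are the ones the harness passed above; the wait loop mirrors its /var/run/spdk_stub0 check):

  # primary process: 4096 MB of hugepage memory, shared-memory id 0, core mask 0xE
  /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
  until [ -e /var/run/spdk_stub0 ]; do sleep 1; done
  # the individual nvme tests below then run against this pre-initialized state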
00:28:45.140 01:11:19 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:28:45.140 01:11:19 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:28:45.140 01:11:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:45.140 01:11:19 -- common/autotest_common.sh@10 -- # set +x 00:28:45.140 ************************************ 00:28:45.140 START TEST nvme_reset 00:28:45.140 ************************************ 00:28:45.140 01:11:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:28:45.399 Initializing NVMe Controllers 00:28:45.399 Skipping QEMU NVMe SSD at 0000:00:06.0 00:28:45.399 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:28:45.399 00:28:45.399 real 0m0.322s 00:28:45.399 user 0m0.117s 00:28:45.399 sys 0m0.138s 00:28:45.399 01:11:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:45.399 ************************************ 00:28:45.399 END TEST nvme_reset 00:28:45.399 ************************************ 00:28:45.399 01:11:19 -- common/autotest_common.sh@10 -- # set +x 00:28:45.659 01:11:19 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:28:45.659 01:11:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:45.659 01:11:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:45.659 01:11:19 -- common/autotest_common.sh@10 -- # set +x 00:28:45.659 ************************************ 00:28:45.659 START TEST nvme_identify 00:28:45.659 ************************************ 00:28:45.659 01:11:19 -- common/autotest_common.sh@1114 -- # nvme_identify 00:28:45.659 01:11:19 -- nvme/nvme.sh@12 -- # bdfs=() 00:28:45.659 01:11:19 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:28:45.659 01:11:19 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:28:45.659 01:11:19 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:28:45.659 01:11:19 -- common/autotest_common.sh@1508 -- # bdfs=() 00:28:45.659 01:11:19 -- common/autotest_common.sh@1508 -- # local bdfs 00:28:45.659 01:11:19 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:45.659 01:11:19 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:45.659 01:11:19 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:28:45.659 01:11:19 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:28:45.659 01:11:19 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:28:45.659 01:11:19 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:28:45.919 [2024-11-18 01:11:20.168016] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 148576 terminated unexpected 00:28:45.919 ===================================================== 00:28:45.919 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:45.919 ===================================================== 00:28:45.919 Controller Capabilities/Features 00:28:45.919 ================================ 00:28:45.919 Vendor ID: 1b36 00:28:45.919 Subsystem Vendor ID: 1af4 00:28:45.919 Serial Number: 12340 00:28:45.919 Model Number: QEMU NVMe Ctrl 00:28:45.919 Firmware Version: 8.0.0 00:28:45.919 Recommended Arb Burst: 6 00:28:45.919 IEEE OUI Identifier: 00 54 52 00:28:45.919 Multi-path I/O 00:28:45.919 May have multiple subsystem ports: No 00:28:45.919 May have multiple controllers: No 00:28:45.919 
Associated with SR-IOV VF: No 00:28:45.919 Max Data Transfer Size: 524288 00:28:45.919 Max Number of Namespaces: 256 00:28:45.919 Max Number of I/O Queues: 64 00:28:45.919 NVMe Specification Version (VS): 1.4 00:28:45.919 NVMe Specification Version (Identify): 1.4 00:28:45.919 Maximum Queue Entries: 2048 00:28:45.919 Contiguous Queues Required: Yes 00:28:45.919 Arbitration Mechanisms Supported 00:28:45.919 Weighted Round Robin: Not Supported 00:28:45.919 Vendor Specific: Not Supported 00:28:45.919 Reset Timeout: 7500 ms 00:28:45.919 Doorbell Stride: 4 bytes 00:28:45.919 NVM Subsystem Reset: Not Supported 00:28:45.919 Command Sets Supported 00:28:45.919 NVM Command Set: Supported 00:28:45.919 Boot Partition: Not Supported 00:28:45.919 Memory Page Size Minimum: 4096 bytes 00:28:45.919 Memory Page Size Maximum: 65536 bytes 00:28:45.919 Persistent Memory Region: Not Supported 00:28:45.919 Optional Asynchronous Events Supported 00:28:45.919 Namespace Attribute Notices: Supported 00:28:45.919 Firmware Activation Notices: Not Supported 00:28:45.919 ANA Change Notices: Not Supported 00:28:45.919 PLE Aggregate Log Change Notices: Not Supported 00:28:45.919 LBA Status Info Alert Notices: Not Supported 00:28:45.919 EGE Aggregate Log Change Notices: Not Supported 00:28:45.919 Normal NVM Subsystem Shutdown event: Not Supported 00:28:45.919 Zone Descriptor Change Notices: Not Supported 00:28:45.919 Discovery Log Change Notices: Not Supported 00:28:45.919 Controller Attributes 00:28:45.919 128-bit Host Identifier: Not Supported 00:28:45.919 Non-Operational Permissive Mode: Not Supported 00:28:45.919 NVM Sets: Not Supported 00:28:45.919 Read Recovery Levels: Not Supported 00:28:45.919 Endurance Groups: Not Supported 00:28:45.919 Predictable Latency Mode: Not Supported 00:28:45.919 Traffic Based Keep ALive: Not Supported 00:28:45.919 Namespace Granularity: Not Supported 00:28:45.919 SQ Associations: Not Supported 00:28:45.919 UUID List: Not Supported 00:28:45.919 Multi-Domain Subsystem: Not Supported 00:28:45.919 Fixed Capacity Management: Not Supported 00:28:45.919 Variable Capacity Management: Not Supported 00:28:45.919 Delete Endurance Group: Not Supported 00:28:45.919 Delete NVM Set: Not Supported 00:28:45.919 Extended LBA Formats Supported: Supported 00:28:45.919 Flexible Data Placement Supported: Not Supported 00:28:45.919 00:28:45.919 Controller Memory Buffer Support 00:28:45.919 ================================ 00:28:45.919 Supported: No 00:28:45.919 00:28:45.919 Persistent Memory Region Support 00:28:45.919 ================================ 00:28:45.919 Supported: No 00:28:45.919 00:28:45.919 Admin Command Set Attributes 00:28:45.919 ============================ 00:28:45.919 Security Send/Receive: Not Supported 00:28:45.919 Format NVM: Supported 00:28:45.919 Firmware Activate/Download: Not Supported 00:28:45.919 Namespace Management: Supported 00:28:45.919 Device Self-Test: Not Supported 00:28:45.919 Directives: Supported 00:28:45.919 NVMe-MI: Not Supported 00:28:45.919 Virtualization Management: Not Supported 00:28:45.919 Doorbell Buffer Config: Supported 00:28:45.919 Get LBA Status Capability: Not Supported 00:28:45.919 Command & Feature Lockdown Capability: Not Supported 00:28:45.919 Abort Command Limit: 4 00:28:45.919 Async Event Request Limit: 4 00:28:45.919 Number of Firmware Slots: N/A 00:28:45.919 Firmware Slot 1 Read-Only: N/A 00:28:45.919 Firmware Activation Without Reset: N/A 00:28:45.919 Multiple Update Detection Support: N/A 00:28:45.919 Firmware Update Granularity: No Information 
Provided 00:28:45.919 Per-Namespace SMART Log: Yes 00:28:45.919 Asymmetric Namespace Access Log Page: Not Supported 00:28:45.920 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:28:45.920 Command Effects Log Page: Supported 00:28:45.920 Get Log Page Extended Data: Supported 00:28:45.920 Telemetry Log Pages: Not Supported 00:28:45.920 Persistent Event Log Pages: Not Supported 00:28:45.920 Supported Log Pages Log Page: May Support 00:28:45.920 Commands Supported & Effects Log Page: Not Supported 00:28:45.920 Feature Identifiers & Effects Log Page:May Support 00:28:45.920 NVMe-MI Commands & Effects Log Page: May Support 00:28:45.920 Data Area 4 for Telemetry Log: Not Supported 00:28:45.920 Error Log Page Entries Supported: 1 00:28:45.920 Keep Alive: Not Supported 00:28:45.920 00:28:45.920 NVM Command Set Attributes 00:28:45.920 ========================== 00:28:45.920 Submission Queue Entry Size 00:28:45.920 Max: 64 00:28:45.920 Min: 64 00:28:45.920 Completion Queue Entry Size 00:28:45.920 Max: 16 00:28:45.920 Min: 16 00:28:45.920 Number of Namespaces: 256 00:28:45.920 Compare Command: Supported 00:28:45.920 Write Uncorrectable Command: Not Supported 00:28:45.920 Dataset Management Command: Supported 00:28:45.920 Write Zeroes Command: Supported 00:28:45.920 Set Features Save Field: Supported 00:28:45.920 Reservations: Not Supported 00:28:45.920 Timestamp: Supported 00:28:45.920 Copy: Supported 00:28:45.920 Volatile Write Cache: Present 00:28:45.920 Atomic Write Unit (Normal): 1 00:28:45.920 Atomic Write Unit (PFail): 1 00:28:45.920 Atomic Compare & Write Unit: 1 00:28:45.920 Fused Compare & Write: Not Supported 00:28:45.920 Scatter-Gather List 00:28:45.920 SGL Command Set: Supported 00:28:45.920 SGL Keyed: Not Supported 00:28:45.920 SGL Bit Bucket Descriptor: Not Supported 00:28:45.920 SGL Metadata Pointer: Not Supported 00:28:45.920 Oversized SGL: Not Supported 00:28:45.920 SGL Metadata Address: Not Supported 00:28:45.920 SGL Offset: Not Supported 00:28:45.920 Transport SGL Data Block: Not Supported 00:28:45.920 Replay Protected Memory Block: Not Supported 00:28:45.920 00:28:45.920 Firmware Slot Information 00:28:45.920 ========================= 00:28:45.920 Active slot: 1 00:28:45.920 Slot 1 Firmware Revision: 1.0 00:28:45.920 00:28:45.920 00:28:45.920 Commands Supported and Effects 00:28:45.920 ============================== 00:28:45.920 Admin Commands 00:28:45.920 -------------- 00:28:45.920 Delete I/O Submission Queue (00h): Supported 00:28:45.920 Create I/O Submission Queue (01h): Supported 00:28:45.920 Get Log Page (02h): Supported 00:28:45.920 Delete I/O Completion Queue (04h): Supported 00:28:45.920 Create I/O Completion Queue (05h): Supported 00:28:45.920 Identify (06h): Supported 00:28:45.920 Abort (08h): Supported 00:28:45.920 Set Features (09h): Supported 00:28:45.920 Get Features (0Ah): Supported 00:28:45.920 Asynchronous Event Request (0Ch): Supported 00:28:45.920 Namespace Attachment (15h): Supported NS-Inventory-Change 00:28:45.920 Directive Send (19h): Supported 00:28:45.920 Directive Receive (1Ah): Supported 00:28:45.920 Virtualization Management (1Ch): Supported 00:28:45.920 Doorbell Buffer Config (7Ch): Supported 00:28:45.920 Format NVM (80h): Supported LBA-Change 00:28:45.920 I/O Commands 00:28:45.920 ------------ 00:28:45.920 Flush (00h): Supported LBA-Change 00:28:45.920 Write (01h): Supported LBA-Change 00:28:45.920 Read (02h): Supported 00:28:45.920 Compare (05h): Supported 00:28:45.920 Write Zeroes (08h): Supported LBA-Change 00:28:45.920 Dataset Management (09h): 
Supported LBA-Change 00:28:45.920 Unknown (0Ch): Supported 00:28:45.920 Unknown (12h): Supported 00:28:45.920 Copy (19h): Supported LBA-Change 00:28:45.920 Unknown (1Dh): Supported LBA-Change 00:28:45.920 00:28:45.920 Error Log 00:28:45.920 ========= 00:28:45.920 00:28:45.920 Arbitration 00:28:45.920 =========== 00:28:45.920 Arbitration Burst: no limit 00:28:45.920 00:28:45.920 Power Management 00:28:45.920 ================ 00:28:45.920 Number of Power States: 1 00:28:45.920 Current Power State: Power State #0 00:28:45.920 Power State #0: 00:28:45.920 Max Power: 25.00 W 00:28:45.920 Non-Operational State: Operational 00:28:45.920 Entry Latency: 16 microseconds 00:28:45.920 Exit Latency: 4 microseconds 00:28:45.920 Relative Read Throughput: 0 00:28:45.920 Relative Read Latency: 0 00:28:45.920 Relative Write Throughput: 0 00:28:45.920 Relative Write Latency: 0 00:28:45.920 Idle Power: Not Reported 00:28:45.920 Active Power: Not Reported 00:28:45.920 Non-Operational Permissive Mode: Not Supported 00:28:45.920 00:28:45.920 Health Information 00:28:45.920 ================== 00:28:45.920 Critical Warnings: 00:28:45.920 Available Spare Space: OK 00:28:45.920 Temperature: OK 00:28:45.920 Device Reliability: OK 00:28:45.920 Read Only: No 00:28:45.920 Volatile Memory Backup: OK 00:28:45.920 Current Temperature: 323 Kelvin (50 Celsius) 00:28:45.920 Temperature Threshold: 343 Kelvin (70 Celsius) 00:28:45.920 Available Spare: 0% 00:28:45.920 Available Spare Threshold: 0% 00:28:45.920 Life Percentage Used: 0% 00:28:45.920 Data Units Read: 7953 00:28:45.920 Data Units Written: 3879 00:28:45.920 Host Read Commands: 345637 00:28:45.920 Host Write Commands: 187983 00:28:45.920 Controller Busy Time: 0 minutes 00:28:45.920 Power Cycles: 0 00:28:45.920 Power On Hours: 0 hours 00:28:45.920 Unsafe Shutdowns: 0 00:28:45.920 Unrecoverable Media Errors: 0 00:28:45.920 Lifetime Error Log Entries: 0 00:28:45.920 Warning Temperature Time: 0 minutes 00:28:45.920 Critical Temperature Time: 0 minutes 00:28:45.920 00:28:45.920 Number of Queues 00:28:45.920 ================ 00:28:45.920 Number of I/O Submission Queues: 64 00:28:45.920 Number of I/O Completion Queues: 64 00:28:45.920 00:28:45.920 ZNS Specific Controller Data 00:28:45.920 ============================ 00:28:45.920 Zone Append Size Limit: 0 00:28:45.920 00:28:45.920 00:28:45.920 Active Namespaces 00:28:45.920 ================= 00:28:45.920 Namespace ID:1 00:28:45.920 Error Recovery Timeout: Unlimited 00:28:45.920 Command Set Identifier: NVM (00h) 00:28:45.920 Deallocate: Supported 00:28:45.920 Deallocated/Unwritten Error: Supported 00:28:45.920 Deallocated Read Value: All 0x00 00:28:45.920 Deallocate in Write Zeroes: Not Supported 00:28:45.920 Deallocated Guard Field: 0xFFFF 00:28:45.920 Flush: Supported 00:28:45.920 Reservation: Not Supported 00:28:45.920 Namespace Sharing Capabilities: Private 00:28:45.920 Size (in LBAs): 1310720 (5GiB) 00:28:45.920 Capacity (in LBAs): 1310720 (5GiB) 00:28:45.920 Utilization (in LBAs): 1310720 (5GiB) 00:28:45.920 Thin Provisioning: Not Supported 00:28:45.920 Per-NS Atomic Units: No 00:28:45.920 Maximum Single Source Range Length: 128 00:28:45.920 Maximum Copy Length: 128 00:28:45.920 Maximum Source Range Count: 128 00:28:45.920 NGUID/EUI64 Never Reused: No 00:28:45.920 Namespace Write Protected: No 00:28:45.920 Number of LBA Formats: 8 00:28:45.920 Current LBA Format: LBA Format #04 00:28:45.920 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:45.920 LBA Format #01: Data Size: 512 Metadata Size: 8 00:28:45.920 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:28:45.920 LBA Format #03: Data Size: 512 Metadata Size: 64 00:28:45.920 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:28:45.920 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:28:45.920 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:28:45.920 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:28:45.920 00:28:45.920 01:11:20 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:28:45.920 01:11:20 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:28:46.180 ===================================================== 00:28:46.180 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:46.180 ===================================================== 00:28:46.180 Controller Capabilities/Features 00:28:46.180 ================================ 00:28:46.180 Vendor ID: 1b36 00:28:46.180 Subsystem Vendor ID: 1af4 00:28:46.180 Serial Number: 12340 00:28:46.180 Model Number: QEMU NVMe Ctrl 00:28:46.180 Firmware Version: 8.0.0 00:28:46.180 Recommended Arb Burst: 6 00:28:46.180 IEEE OUI Identifier: 00 54 52 00:28:46.180 Multi-path I/O 00:28:46.180 May have multiple subsystem ports: No 00:28:46.181 May have multiple controllers: No 00:28:46.181 Associated with SR-IOV VF: No 00:28:46.181 Max Data Transfer Size: 524288 00:28:46.181 Max Number of Namespaces: 256 00:28:46.181 Max Number of I/O Queues: 64 00:28:46.181 NVMe Specification Version (VS): 1.4 00:28:46.181 NVMe Specification Version (Identify): 1.4 00:28:46.181 Maximum Queue Entries: 2048 00:28:46.181 Contiguous Queues Required: Yes 00:28:46.181 Arbitration Mechanisms Supported 00:28:46.181 Weighted Round Robin: Not Supported 00:28:46.181 Vendor Specific: Not Supported 00:28:46.181 Reset Timeout: 7500 ms 00:28:46.181 Doorbell Stride: 4 bytes 00:28:46.181 NVM Subsystem Reset: Not Supported 00:28:46.181 Command Sets Supported 00:28:46.181 NVM Command Set: Supported 00:28:46.181 Boot Partition: Not Supported 00:28:46.181 Memory Page Size Minimum: 4096 bytes 00:28:46.181 Memory Page Size Maximum: 65536 bytes 00:28:46.181 Persistent Memory Region: Not Supported 00:28:46.181 Optional Asynchronous Events Supported 00:28:46.181 Namespace Attribute Notices: Supported 00:28:46.181 Firmware Activation Notices: Not Supported 00:28:46.181 ANA Change Notices: Not Supported 00:28:46.181 PLE Aggregate Log Change Notices: Not Supported 00:28:46.181 LBA Status Info Alert Notices: Not Supported 00:28:46.181 EGE Aggregate Log Change Notices: Not Supported 00:28:46.181 Normal NVM Subsystem Shutdown event: Not Supported 00:28:46.181 Zone Descriptor Change Notices: Not Supported 00:28:46.181 Discovery Log Change Notices: Not Supported 00:28:46.181 Controller Attributes 00:28:46.181 128-bit Host Identifier: Not Supported 00:28:46.181 Non-Operational Permissive Mode: Not Supported 00:28:46.181 NVM Sets: Not Supported 00:28:46.181 Read Recovery Levels: Not Supported 00:28:46.181 Endurance Groups: Not Supported 00:28:46.181 Predictable Latency Mode: Not Supported 00:28:46.181 Traffic Based Keep ALive: Not Supported 00:28:46.181 Namespace Granularity: Not Supported 00:28:46.181 SQ Associations: Not Supported 00:28:46.181 UUID List: Not Supported 00:28:46.181 Multi-Domain Subsystem: Not Supported 00:28:46.181 Fixed Capacity Management: Not Supported 00:28:46.181 Variable Capacity Management: Not Supported 00:28:46.181 Delete Endurance Group: Not Supported 00:28:46.181 Delete NVM Set: Not Supported 00:28:46.181 Extended LBA Formats Supported: Supported 
00:28:46.181 Flexible Data Placement Supported: Not Supported 00:28:46.181 00:28:46.181 Controller Memory Buffer Support 00:28:46.181 ================================ 00:28:46.181 Supported: No 00:28:46.181 00:28:46.181 Persistent Memory Region Support 00:28:46.181 ================================ 00:28:46.181 Supported: No 00:28:46.181 00:28:46.181 Admin Command Set Attributes 00:28:46.181 ============================ 00:28:46.181 Security Send/Receive: Not Supported 00:28:46.181 Format NVM: Supported 00:28:46.181 Firmware Activate/Download: Not Supported 00:28:46.181 Namespace Management: Supported 00:28:46.181 Device Self-Test: Not Supported 00:28:46.181 Directives: Supported 00:28:46.181 NVMe-MI: Not Supported 00:28:46.181 Virtualization Management: Not Supported 00:28:46.181 Doorbell Buffer Config: Supported 00:28:46.181 Get LBA Status Capability: Not Supported 00:28:46.181 Command & Feature Lockdown Capability: Not Supported 00:28:46.181 Abort Command Limit: 4 00:28:46.181 Async Event Request Limit: 4 00:28:46.181 Number of Firmware Slots: N/A 00:28:46.181 Firmware Slot 1 Read-Only: N/A 00:28:46.181 Firmware Activation Without Reset: N/A 00:28:46.181 Multiple Update Detection Support: N/A 00:28:46.181 Firmware Update Granularity: No Information Provided 00:28:46.181 Per-Namespace SMART Log: Yes 00:28:46.181 Asymmetric Namespace Access Log Page: Not Supported 00:28:46.181 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:28:46.181 Command Effects Log Page: Supported 00:28:46.181 Get Log Page Extended Data: Supported 00:28:46.181 Telemetry Log Pages: Not Supported 00:28:46.181 Persistent Event Log Pages: Not Supported 00:28:46.181 Supported Log Pages Log Page: May Support 00:28:46.181 Commands Supported & Effects Log Page: Not Supported 00:28:46.181 Feature Identifiers & Effects Log Page:May Support 00:28:46.181 NVMe-MI Commands & Effects Log Page: May Support 00:28:46.181 Data Area 4 for Telemetry Log: Not Supported 00:28:46.181 Error Log Page Entries Supported: 1 00:28:46.181 Keep Alive: Not Supported 00:28:46.181 00:28:46.181 NVM Command Set Attributes 00:28:46.181 ========================== 00:28:46.181 Submission Queue Entry Size 00:28:46.181 Max: 64 00:28:46.181 Min: 64 00:28:46.181 Completion Queue Entry Size 00:28:46.181 Max: 16 00:28:46.181 Min: 16 00:28:46.181 Number of Namespaces: 256 00:28:46.181 Compare Command: Supported 00:28:46.181 Write Uncorrectable Command: Not Supported 00:28:46.181 Dataset Management Command: Supported 00:28:46.181 Write Zeroes Command: Supported 00:28:46.181 Set Features Save Field: Supported 00:28:46.181 Reservations: Not Supported 00:28:46.181 Timestamp: Supported 00:28:46.181 Copy: Supported 00:28:46.181 Volatile Write Cache: Present 00:28:46.181 Atomic Write Unit (Normal): 1 00:28:46.181 Atomic Write Unit (PFail): 1 00:28:46.181 Atomic Compare & Write Unit: 1 00:28:46.181 Fused Compare & Write: Not Supported 00:28:46.181 Scatter-Gather List 00:28:46.181 SGL Command Set: Supported 00:28:46.181 SGL Keyed: Not Supported 00:28:46.181 SGL Bit Bucket Descriptor: Not Supported 00:28:46.181 SGL Metadata Pointer: Not Supported 00:28:46.181 Oversized SGL: Not Supported 00:28:46.181 SGL Metadata Address: Not Supported 00:28:46.181 SGL Offset: Not Supported 00:28:46.181 Transport SGL Data Block: Not Supported 00:28:46.181 Replay Protected Memory Block: Not Supported 00:28:46.181 00:28:46.181 Firmware Slot Information 00:28:46.181 ========================= 00:28:46.181 Active slot: 1 00:28:46.181 Slot 1 Firmware Revision: 1.0 00:28:46.181 00:28:46.181 
00:28:46.181 Commands Supported and Effects 00:28:46.181 ============================== 00:28:46.181 Admin Commands 00:28:46.181 -------------- 00:28:46.181 Delete I/O Submission Queue (00h): Supported 00:28:46.181 Create I/O Submission Queue (01h): Supported 00:28:46.181 Get Log Page (02h): Supported 00:28:46.181 Delete I/O Completion Queue (04h): Supported 00:28:46.181 Create I/O Completion Queue (05h): Supported 00:28:46.181 Identify (06h): Supported 00:28:46.181 Abort (08h): Supported 00:28:46.181 Set Features (09h): Supported 00:28:46.181 Get Features (0Ah): Supported 00:28:46.181 Asynchronous Event Request (0Ch): Supported 00:28:46.181 Namespace Attachment (15h): Supported NS-Inventory-Change 00:28:46.181 Directive Send (19h): Supported 00:28:46.181 Directive Receive (1Ah): Supported 00:28:46.181 Virtualization Management (1Ch): Supported 00:28:46.181 Doorbell Buffer Config (7Ch): Supported 00:28:46.181 Format NVM (80h): Supported LBA-Change 00:28:46.181 I/O Commands 00:28:46.181 ------------ 00:28:46.181 Flush (00h): Supported LBA-Change 00:28:46.181 Write (01h): Supported LBA-Change 00:28:46.181 Read (02h): Supported 00:28:46.181 Compare (05h): Supported 00:28:46.181 Write Zeroes (08h): Supported LBA-Change 00:28:46.181 Dataset Management (09h): Supported LBA-Change 00:28:46.181 Unknown (0Ch): Supported 00:28:46.181 Unknown (12h): Supported 00:28:46.181 Copy (19h): Supported LBA-Change 00:28:46.181 Unknown (1Dh): Supported LBA-Change 00:28:46.181 00:28:46.181 Error Log 00:28:46.181 ========= 00:28:46.181 00:28:46.181 Arbitration 00:28:46.181 =========== 00:28:46.181 Arbitration Burst: no limit 00:28:46.181 00:28:46.181 Power Management 00:28:46.181 ================ 00:28:46.181 Number of Power States: 1 00:28:46.181 Current Power State: Power State #0 00:28:46.181 Power State #0: 00:28:46.181 Max Power: 25.00 W 00:28:46.181 Non-Operational State: Operational 00:28:46.181 Entry Latency: 16 microseconds 00:28:46.181 Exit Latency: 4 microseconds 00:28:46.181 Relative Read Throughput: 0 00:28:46.181 Relative Read Latency: 0 00:28:46.181 Relative Write Throughput: 0 00:28:46.181 Relative Write Latency: 0 00:28:46.181 Idle Power: Not Reported 00:28:46.181 Active Power: Not Reported 00:28:46.181 Non-Operational Permissive Mode: Not Supported 00:28:46.181 00:28:46.181 Health Information 00:28:46.181 ================== 00:28:46.181 Critical Warnings: 00:28:46.181 Available Spare Space: OK 00:28:46.181 Temperature: OK 00:28:46.181 Device Reliability: OK 00:28:46.181 Read Only: No 00:28:46.181 Volatile Memory Backup: OK 00:28:46.182 Current Temperature: 323 Kelvin (50 Celsius) 00:28:46.182 Temperature Threshold: 343 Kelvin (70 Celsius) 00:28:46.182 Available Spare: 0% 00:28:46.182 Available Spare Threshold: 0% 00:28:46.182 Life Percentage Used: 0% 00:28:46.182 Data Units Read: 7953 00:28:46.182 Data Units Written: 3879 00:28:46.182 Host Read Commands: 345637 00:28:46.182 Host Write Commands: 187983 00:28:46.182 Controller Busy Time: 0 minutes 00:28:46.182 Power Cycles: 0 00:28:46.182 Power On Hours: 0 hours 00:28:46.182 Unsafe Shutdowns: 0 00:28:46.182 Unrecoverable Media Errors: 0 00:28:46.182 Lifetime Error Log Entries: 0 00:28:46.182 Warning Temperature Time: 0 minutes 00:28:46.182 Critical Temperature Time: 0 minutes 00:28:46.182 00:28:46.182 Number of Queues 00:28:46.182 ================ 00:28:46.182 Number of I/O Submission Queues: 64 00:28:46.182 Number of I/O Completion Queues: 64 00:28:46.182 00:28:46.182 ZNS Specific Controller Data 00:28:46.182 ============================ 
00:28:46.182 Zone Append Size Limit: 0 00:28:46.182 00:28:46.182 00:28:46.182 Active Namespaces 00:28:46.182 ================= 00:28:46.182 Namespace ID:1 00:28:46.182 Error Recovery Timeout: Unlimited 00:28:46.182 Command Set Identifier: NVM (00h) 00:28:46.182 Deallocate: Supported 00:28:46.182 Deallocated/Unwritten Error: Supported 00:28:46.182 Deallocated Read Value: All 0x00 00:28:46.182 Deallocate in Write Zeroes: Not Supported 00:28:46.182 Deallocated Guard Field: 0xFFFF 00:28:46.182 Flush: Supported 00:28:46.182 Reservation: Not Supported 00:28:46.182 Namespace Sharing Capabilities: Private 00:28:46.182 Size (in LBAs): 1310720 (5GiB) 00:28:46.182 Capacity (in LBAs): 1310720 (5GiB) 00:28:46.182 Utilization (in LBAs): 1310720 (5GiB) 00:28:46.182 Thin Provisioning: Not Supported 00:28:46.182 Per-NS Atomic Units: No 00:28:46.182 Maximum Single Source Range Length: 128 00:28:46.182 Maximum Copy Length: 128 00:28:46.182 Maximum Source Range Count: 128 00:28:46.182 NGUID/EUI64 Never Reused: No 00:28:46.182 Namespace Write Protected: No 00:28:46.182 Number of LBA Formats: 8 00:28:46.182 Current LBA Format: LBA Format #04 00:28:46.182 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:46.182 LBA Format #01: Data Size: 512 Metadata Size: 8 00:28:46.182 LBA Format #02: Data Size: 512 Metadata Size: 16 00:28:46.182 LBA Format #03: Data Size: 512 Metadata Size: 64 00:28:46.182 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:28:46.182 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:28:46.182 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:28:46.182 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:28:46.182 00:28:46.441 00:28:46.442 real 0m0.734s 00:28:46.442 user 0m0.251s 00:28:46.442 sys 0m0.360s 00:28:46.442 01:11:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:46.442 01:11:20 -- common/autotest_common.sh@10 -- # set +x 00:28:46.442 ************************************ 00:28:46.442 END TEST nvme_identify 00:28:46.442 ************************************ 00:28:46.442 01:11:20 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:28:46.442 01:11:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:46.442 01:11:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:46.442 01:11:20 -- common/autotest_common.sh@10 -- # set +x 00:28:46.442 ************************************ 00:28:46.442 START TEST nvme_perf 00:28:46.442 ************************************ 00:28:46.442 01:11:20 -- common/autotest_common.sh@1114 -- # nvme_perf 00:28:46.442 01:11:20 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:28:47.825 Initializing NVMe Controllers 00:28:47.825 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:47.825 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:47.825 Initialization complete. Launching workers. 
00:28:47.825 ======================================================== 00:28:47.825 Latency(us) 00:28:47.825 Device Information : IOPS MiB/s Average min max 00:28:47.825 PCIE (0000:00:06.0) NSID 1 from core 0: 51840.00 607.50 2468.93 1342.03 6128.43 00:28:47.825 ======================================================== 00:28:47.825 Total : 51840.00 607.50 2468.93 1342.03 6128.43 00:28:47.825 00:28:47.825 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:47.825 ================================================================================= 00:28:47.825 1.00000% : 1490.164us 00:28:47.825 10.00000% : 1708.617us 00:28:47.825 25.00000% : 1981.684us 00:28:47.825 50.00000% : 2465.402us 00:28:47.825 75.00000% : 2917.912us 00:28:47.825 90.00000% : 3183.177us 00:28:47.825 95.00000% : 3370.423us 00:28:47.825 98.00000% : 3776.122us 00:28:47.825 99.00000% : 3963.368us 00:28:47.825 99.50000% : 4306.651us 00:28:47.825 99.90000% : 5055.634us 00:28:47.825 99.99000% : 5991.863us 00:28:47.825 99.99900% : 6147.901us 00:28:47.825 99.99990% : 6147.901us 00:28:47.825 99.99999% : 6147.901us 00:28:47.825 00:28:47.825 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:47.825 ============================================================================== 00:28:47.825 Range in us Cumulative IO count 00:28:47.825 1341.928 - 1349.730: 0.0077% ( 4) 00:28:47.825 1349.730 - 1357.531: 0.0096% ( 1) 00:28:47.825 1357.531 - 1365.333: 0.0135% ( 2) 00:28:47.825 1365.333 - 1373.135: 0.0193% ( 3) 00:28:47.825 1373.135 - 1380.937: 0.0328% ( 7) 00:28:47.825 1380.937 - 1388.739: 0.0482% ( 8) 00:28:47.825 1388.739 - 1396.541: 0.0617% ( 7) 00:28:47.825 1396.541 - 1404.343: 0.0829% ( 11) 00:28:47.825 1404.343 - 1412.145: 0.1003% ( 9) 00:28:47.825 1412.145 - 1419.947: 0.1370% ( 19) 00:28:47.825 1419.947 - 1427.749: 0.1813% ( 23) 00:28:47.825 1427.749 - 1435.550: 0.2238% ( 22) 00:28:47.825 1435.550 - 1443.352: 0.2990% ( 39) 00:28:47.825 1443.352 - 1451.154: 0.3781% ( 41) 00:28:47.825 1451.154 - 1458.956: 0.4880% ( 57) 00:28:47.825 1458.956 - 1466.758: 0.6096% ( 63) 00:28:47.825 1466.758 - 1474.560: 0.7388% ( 67) 00:28:47.825 1474.560 - 1482.362: 0.8835% ( 75) 00:28:47.825 1482.362 - 1490.164: 1.0359% ( 79) 00:28:47.825 1490.164 - 1497.966: 1.2326% ( 102) 00:28:47.825 1497.966 - 1505.768: 1.4468% ( 111) 00:28:47.825 1505.768 - 1513.570: 1.6840% ( 123) 00:28:47.825 1513.570 - 1521.371: 1.9194% ( 122) 00:28:47.825 1521.371 - 1529.173: 2.1856% ( 138) 00:28:47.825 1529.173 - 1536.975: 2.4556% ( 140) 00:28:47.825 1536.975 - 1544.777: 2.7238% ( 139) 00:28:47.825 1544.777 - 1552.579: 3.0112% ( 149) 00:28:47.825 1552.579 - 1560.381: 3.3121% ( 156) 00:28:47.825 1560.381 - 1568.183: 3.6555% ( 178) 00:28:47.825 1568.183 - 1575.985: 3.9429% ( 149) 00:28:47.825 1575.985 - 1583.787: 4.2901% ( 180) 00:28:47.825 1583.787 - 1591.589: 4.6103% ( 166) 00:28:47.825 1591.589 - 1599.390: 4.9691% ( 186) 00:28:47.825 1599.390 - 1607.192: 5.3067% ( 175) 00:28:47.825 1607.192 - 1614.994: 5.6694% ( 188) 00:28:47.825 1614.994 - 1622.796: 6.0069% ( 175) 00:28:47.825 1622.796 - 1630.598: 6.3657% ( 186) 00:28:47.825 1630.598 - 1638.400: 6.7168% ( 182) 00:28:47.825 1638.400 - 1646.202: 7.0737% ( 185) 00:28:47.825 1646.202 - 1654.004: 7.4653% ( 203) 00:28:47.825 1654.004 - 1661.806: 7.7971% ( 172) 00:28:47.825 1661.806 - 1669.608: 8.1636% ( 190) 00:28:47.825 1669.608 - 1677.410: 8.5340% ( 192) 00:28:47.825 1677.410 - 1685.211: 8.9120% ( 196) 00:28:47.825 1685.211 - 1693.013: 9.3152% ( 209) 00:28:47.825 1693.013 - 1700.815: 9.7049% ( 
202) 00:28:47.825 1700.815 - 1708.617: 10.1061% ( 208) 00:28:47.825 1708.617 - 1716.419: 10.5170% ( 213) 00:28:47.825 1716.419 - 1724.221: 10.9259% ( 212) 00:28:47.825 1724.221 - 1732.023: 11.3426% ( 216) 00:28:47.825 1732.023 - 1739.825: 11.7458% ( 209) 00:28:47.825 1739.825 - 1747.627: 12.1856% ( 228) 00:28:47.825 1747.627 - 1755.429: 12.5907% ( 210) 00:28:47.825 1755.429 - 1763.230: 13.0382% ( 232) 00:28:47.825 1763.230 - 1771.032: 13.4587% ( 218) 00:28:47.825 1771.032 - 1778.834: 13.8870% ( 222) 00:28:47.825 1778.834 - 1786.636: 14.3345% ( 232) 00:28:47.825 1786.636 - 1794.438: 14.7338% ( 207) 00:28:47.825 1794.438 - 1802.240: 15.1833% ( 233) 00:28:47.825 1802.240 - 1810.042: 15.6346% ( 234) 00:28:47.825 1810.042 - 1817.844: 16.0764% ( 229) 00:28:47.825 1817.844 - 1825.646: 16.5008% ( 220) 00:28:47.825 1825.646 - 1833.448: 16.9290% ( 222) 00:28:47.825 1833.448 - 1841.250: 17.3785% ( 233) 00:28:47.825 1841.250 - 1849.051: 17.8260% ( 232) 00:28:47.825 1849.051 - 1856.853: 18.2388% ( 214) 00:28:47.825 1856.853 - 1864.655: 18.6555% ( 216) 00:28:47.825 1864.655 - 1872.457: 19.1146% ( 238) 00:28:47.826 1872.457 - 1880.259: 19.5255% ( 213) 00:28:47.826 1880.259 - 1888.061: 19.9826% ( 237) 00:28:47.826 1888.061 - 1895.863: 20.3935% ( 213) 00:28:47.826 1895.863 - 1903.665: 20.8063% ( 214) 00:28:47.826 1903.665 - 1911.467: 21.2577% ( 234) 00:28:47.826 1911.467 - 1919.269: 21.6590% ( 208) 00:28:47.826 1919.269 - 1927.070: 22.1046% ( 231) 00:28:47.826 1927.070 - 1934.872: 22.5309% ( 221) 00:28:47.826 1934.872 - 1942.674: 22.9398% ( 212) 00:28:47.826 1942.674 - 1950.476: 23.3719% ( 224) 00:28:47.826 1950.476 - 1958.278: 23.7982% ( 221) 00:28:47.826 1958.278 - 1966.080: 24.2207% ( 219) 00:28:47.826 1966.080 - 1973.882: 24.6566% ( 226) 00:28:47.826 1973.882 - 1981.684: 25.0733% ( 216) 00:28:47.826 1981.684 - 1989.486: 25.4900% ( 216) 00:28:47.826 1989.486 - 1997.288: 25.9414% ( 234) 00:28:47.826 1997.288 - 2012.891: 26.7785% ( 434) 00:28:47.826 2012.891 - 2028.495: 27.6003% ( 426) 00:28:47.826 2028.495 - 2044.099: 28.4201% ( 425) 00:28:47.826 2044.099 - 2059.703: 29.3133% ( 463) 00:28:47.826 2059.703 - 2075.307: 30.1408% ( 429) 00:28:47.826 2075.307 - 2090.910: 30.9510% ( 420) 00:28:47.826 2090.910 - 2106.514: 31.8268% ( 454) 00:28:47.826 2106.514 - 2122.118: 32.6466% ( 425) 00:28:47.826 2122.118 - 2137.722: 33.4877% ( 436) 00:28:47.826 2137.722 - 2153.326: 34.3056% ( 424) 00:28:47.826 2153.326 - 2168.930: 35.1350% ( 430) 00:28:47.826 2168.930 - 2184.533: 35.9471% ( 421) 00:28:47.826 2184.533 - 2200.137: 36.7728% ( 428) 00:28:47.826 2200.137 - 2215.741: 37.6080% ( 433) 00:28:47.826 2215.741 - 2231.345: 38.4201% ( 421) 00:28:47.826 2231.345 - 2246.949: 39.2612% ( 436) 00:28:47.826 2246.949 - 2262.552: 40.0675% ( 418) 00:28:47.826 2262.552 - 2278.156: 40.8893% ( 426) 00:28:47.826 2278.156 - 2293.760: 41.7014% ( 421) 00:28:47.826 2293.760 - 2309.364: 42.5231% ( 426) 00:28:47.826 2309.364 - 2324.968: 43.3179% ( 412) 00:28:47.826 2324.968 - 2340.571: 44.1165% ( 414) 00:28:47.826 2340.571 - 2356.175: 44.9576% ( 436) 00:28:47.826 2356.175 - 2371.779: 45.7870% ( 430) 00:28:47.826 2371.779 - 2387.383: 46.6165% ( 430) 00:28:47.826 2387.383 - 2402.987: 47.4421% ( 428) 00:28:47.826 2402.987 - 2418.590: 48.2523% ( 420) 00:28:47.826 2418.590 - 2434.194: 49.0799% ( 429) 00:28:47.826 2434.194 - 2449.798: 49.9035% ( 427) 00:28:47.826 2449.798 - 2465.402: 50.7176% ( 422) 00:28:47.826 2465.402 - 2481.006: 51.5316% ( 422) 00:28:47.826 2481.006 - 2496.610: 52.3515% ( 425) 00:28:47.826 2496.610 - 2512.213: 53.1674% ( 423) 
00:28:47.826 2512.213 - 2527.817: 53.9911% ( 427) 00:28:47.826 2527.817 - 2543.421: 54.8206% ( 430) 00:28:47.826 2543.421 - 2559.025: 55.6308% ( 420) 00:28:47.826 2559.025 - 2574.629: 56.4506% ( 425) 00:28:47.826 2574.629 - 2590.232: 57.2762% ( 428) 00:28:47.826 2590.232 - 2605.836: 58.0903% ( 422) 00:28:47.826 2605.836 - 2621.440: 58.9333% ( 437) 00:28:47.826 2621.440 - 2637.044: 59.7647% ( 431) 00:28:47.826 2637.044 - 2652.648: 60.6096% ( 438) 00:28:47.826 2652.648 - 2668.251: 61.4390% ( 430) 00:28:47.826 2668.251 - 2683.855: 62.3302% ( 462) 00:28:47.826 2683.855 - 2699.459: 63.1501% ( 425) 00:28:47.826 2699.459 - 2715.063: 64.0258% ( 454) 00:28:47.826 2715.063 - 2730.667: 64.8862% ( 446) 00:28:47.826 2730.667 - 2746.270: 65.7485% ( 447) 00:28:47.826 2746.270 - 2761.874: 66.5818% ( 432) 00:28:47.826 2761.874 - 2777.478: 67.4498% ( 450) 00:28:47.826 2777.478 - 2793.082: 68.3160% ( 449) 00:28:47.826 2793.082 - 2808.686: 69.1570% ( 436) 00:28:47.826 2808.686 - 2824.290: 70.0174% ( 446) 00:28:47.826 2824.290 - 2839.893: 70.8738% ( 444) 00:28:47.826 2839.893 - 2855.497: 71.7400% ( 449) 00:28:47.826 2855.497 - 2871.101: 72.6408% ( 467) 00:28:47.826 2871.101 - 2886.705: 73.5050% ( 448) 00:28:47.826 2886.705 - 2902.309: 74.3673% ( 447) 00:28:47.826 2902.309 - 2917.912: 75.2778% ( 472) 00:28:47.826 2917.912 - 2933.516: 76.1767% ( 466) 00:28:47.826 2933.516 - 2949.120: 77.0756% ( 466) 00:28:47.826 2949.120 - 2964.724: 77.9572% ( 457) 00:28:47.826 2964.724 - 2980.328: 78.8503% ( 463) 00:28:47.826 2980.328 - 2995.931: 79.7569% ( 470) 00:28:47.826 2995.931 - 3011.535: 80.6597% ( 468) 00:28:47.826 3011.535 - 3027.139: 81.5625% ( 468) 00:28:47.826 3027.139 - 3042.743: 82.4633% ( 467) 00:28:47.826 3042.743 - 3058.347: 83.3449% ( 457) 00:28:47.826 3058.347 - 3073.950: 84.2477% ( 468) 00:28:47.826 3073.950 - 3089.554: 85.1427% ( 464) 00:28:47.826 3089.554 - 3105.158: 86.0301% ( 460) 00:28:47.826 3105.158 - 3120.762: 86.9078% ( 455) 00:28:47.826 3120.762 - 3136.366: 87.8221% ( 474) 00:28:47.826 3136.366 - 3151.970: 88.6574% ( 433) 00:28:47.826 3151.970 - 3167.573: 89.5139% ( 444) 00:28:47.826 3167.573 - 3183.177: 90.3164% ( 416) 00:28:47.826 3183.177 - 3198.781: 91.0880% ( 400) 00:28:47.826 3198.781 - 3214.385: 91.7361% ( 336) 00:28:47.826 3214.385 - 3229.989: 92.3476% ( 317) 00:28:47.826 3229.989 - 3245.592: 92.8414% ( 256) 00:28:47.826 3245.592 - 3261.196: 93.2870% ( 231) 00:28:47.826 3261.196 - 3276.800: 93.6651% ( 196) 00:28:47.826 3276.800 - 3292.404: 94.0008% ( 174) 00:28:47.826 3292.404 - 3308.008: 94.2959% ( 153) 00:28:47.826 3308.008 - 3323.611: 94.5583% ( 136) 00:28:47.826 3323.611 - 3339.215: 94.7975% ( 124) 00:28:47.826 3339.215 - 3354.819: 94.9942% ( 102) 00:28:47.826 3354.819 - 3370.423: 95.1852% ( 99) 00:28:47.826 3370.423 - 3386.027: 95.3723% ( 97) 00:28:47.826 3386.027 - 3401.630: 95.5421% ( 88) 00:28:47.826 3401.630 - 3417.234: 95.6906% ( 77) 00:28:47.826 3417.234 - 3432.838: 95.8179% ( 66) 00:28:47.827 3432.838 - 3448.442: 95.9510% ( 69) 00:28:47.827 3448.442 - 3464.046: 96.0918% ( 73) 00:28:47.827 3464.046 - 3479.650: 96.2076% ( 60) 00:28:47.827 3479.650 - 3495.253: 96.3098% ( 53) 00:28:47.827 3495.253 - 3510.857: 96.4159% ( 55) 00:28:47.827 3510.857 - 3526.461: 96.5181% ( 53) 00:28:47.827 3526.461 - 3542.065: 96.6184% ( 52) 00:28:47.827 3542.065 - 3557.669: 96.7091% ( 47) 00:28:47.827 3557.669 - 3573.272: 96.7882% ( 41) 00:28:47.827 3573.272 - 3588.876: 96.8904% ( 53) 00:28:47.827 3588.876 - 3604.480: 96.9927% ( 53) 00:28:47.827 3604.480 - 3620.084: 97.0891% ( 50) 00:28:47.827 3620.084 
- 3635.688: 97.1817% ( 48) 00:28:47.827 3635.688 - 3651.291: 97.2762% ( 49) 00:28:47.827 3651.291 - 3666.895: 97.3688% ( 48) 00:28:47.827 3666.895 - 3682.499: 97.4653% ( 50) 00:28:47.827 3682.499 - 3698.103: 97.5559% ( 47) 00:28:47.827 3698.103 - 3713.707: 97.6408% ( 44) 00:28:47.827 3713.707 - 3729.310: 97.7392% ( 51) 00:28:47.827 3729.310 - 3744.914: 97.8356% ( 50) 00:28:47.827 3744.914 - 3760.518: 97.9263% ( 47) 00:28:47.827 3760.518 - 3776.122: 98.0208% ( 49) 00:28:47.827 3776.122 - 3791.726: 98.1096% ( 46) 00:28:47.827 3791.726 - 3807.330: 98.1925% ( 43) 00:28:47.827 3807.330 - 3822.933: 98.2851% ( 48) 00:28:47.827 3822.933 - 3838.537: 98.3777% ( 48) 00:28:47.827 3838.537 - 3854.141: 98.4664% ( 46) 00:28:47.827 3854.141 - 3869.745: 98.5494% ( 43) 00:28:47.827 3869.745 - 3885.349: 98.6439% ( 49) 00:28:47.827 3885.349 - 3900.952: 98.7326% ( 46) 00:28:47.827 3900.952 - 3916.556: 98.8175% ( 44) 00:28:47.827 3916.556 - 3932.160: 98.8927% ( 39) 00:28:47.827 3932.160 - 3947.764: 98.9564% ( 33) 00:28:47.827 3947.764 - 3963.368: 99.0316% ( 39) 00:28:47.827 3963.368 - 3978.971: 99.0818% ( 26) 00:28:47.827 3978.971 - 3994.575: 99.1262% ( 23) 00:28:47.827 3994.575 - 4025.783: 99.1975% ( 37) 00:28:47.827 4025.783 - 4056.990: 99.2458% ( 25) 00:28:47.827 4056.990 - 4088.198: 99.2843% ( 20) 00:28:47.827 4088.198 - 4119.406: 99.3248% ( 21) 00:28:47.827 4119.406 - 4150.613: 99.3519% ( 14) 00:28:47.827 4150.613 - 4181.821: 99.3827% ( 16) 00:28:47.827 4181.821 - 4213.029: 99.4155% ( 17) 00:28:47.827 4213.029 - 4244.236: 99.4425% ( 14) 00:28:47.827 4244.236 - 4275.444: 99.4715% ( 15) 00:28:47.827 4275.444 - 4306.651: 99.5004% ( 15) 00:28:47.827 4306.651 - 4337.859: 99.5293% ( 15) 00:28:47.827 4337.859 - 4369.067: 99.5467% ( 9) 00:28:47.827 4369.067 - 4400.274: 99.5660% ( 10) 00:28:47.827 4400.274 - 4431.482: 99.5930% ( 14) 00:28:47.827 4431.482 - 4462.690: 99.6142% ( 11) 00:28:47.827 4462.690 - 4493.897: 99.6335% ( 10) 00:28:47.827 4493.897 - 4525.105: 99.6528% ( 10) 00:28:47.827 4525.105 - 4556.312: 99.6701% ( 9) 00:28:47.827 4556.312 - 4587.520: 99.6914% ( 11) 00:28:47.827 4587.520 - 4618.728: 99.7087% ( 9) 00:28:47.827 4618.728 - 4649.935: 99.7242% ( 8) 00:28:47.827 4649.935 - 4681.143: 99.7396% ( 8) 00:28:47.827 4681.143 - 4712.350: 99.7550% ( 8) 00:28:47.827 4712.350 - 4743.558: 99.7724% ( 9) 00:28:47.827 4743.558 - 4774.766: 99.7897% ( 9) 00:28:47.827 4774.766 - 4805.973: 99.8052% ( 8) 00:28:47.827 4805.973 - 4837.181: 99.8187% ( 7) 00:28:47.827 4837.181 - 4868.389: 99.8360% ( 9) 00:28:47.827 4868.389 - 4899.596: 99.8476% ( 6) 00:28:47.827 4899.596 - 4930.804: 99.8611% ( 7) 00:28:47.827 4930.804 - 4962.011: 99.8727% ( 6) 00:28:47.827 4962.011 - 4993.219: 99.8843% ( 6) 00:28:47.827 4993.219 - 5024.427: 99.8939% ( 5) 00:28:47.827 5024.427 - 5055.634: 99.9035% ( 5) 00:28:47.827 5055.634 - 5086.842: 99.9093% ( 3) 00:28:47.827 5086.842 - 5118.050: 99.9151% ( 3) 00:28:47.827 5118.050 - 5149.257: 99.9190% ( 2) 00:28:47.827 5149.257 - 5180.465: 99.9209% ( 1) 00:28:47.827 5180.465 - 5211.672: 99.9248% ( 2) 00:28:47.827 5211.672 - 5242.880: 99.9267% ( 1) 00:28:47.827 5242.880 - 5274.088: 99.9286% ( 1) 00:28:47.827 5274.088 - 5305.295: 99.9325% ( 2) 00:28:47.827 5305.295 - 5336.503: 99.9344% ( 1) 00:28:47.827 5336.503 - 5367.710: 99.9383% ( 2) 00:28:47.827 5367.710 - 5398.918: 99.9402% ( 1) 00:28:47.827 5398.918 - 5430.126: 99.9421% ( 1) 00:28:47.827 5430.126 - 5461.333: 99.9460% ( 2) 00:28:47.827 5461.333 - 5492.541: 99.9479% ( 1) 00:28:47.827 5492.541 - 5523.749: 99.9518% ( 2) 00:28:47.827 5523.749 - 
5554.956: 99.9537% ( 1) 00:28:47.827 5554.956 - 5586.164: 99.9556% ( 1) 00:28:47.827 5586.164 - 5617.371: 99.9595% ( 2) 00:28:47.827 5617.371 - 5648.579: 99.9614% ( 1) 00:28:47.827 5648.579 - 5679.787: 99.9653% ( 2) 00:28:47.827 5679.787 - 5710.994: 99.9672% ( 1) 00:28:47.827 5710.994 - 5742.202: 99.9711% ( 2) 00:28:47.827 5742.202 - 5773.410: 99.9730% ( 1) 00:28:47.827 5773.410 - 5804.617: 99.9749% ( 1) 00:28:47.827 5804.617 - 5835.825: 99.9769% ( 1) 00:28:47.827 5835.825 - 5867.032: 99.9807% ( 2) 00:28:47.827 5867.032 - 5898.240: 99.9826% ( 1) 00:28:47.827 5898.240 - 5929.448: 99.9865% ( 2) 00:28:47.827 5929.448 - 5960.655: 99.9884% ( 1) 00:28:47.827 5960.655 - 5991.863: 99.9904% ( 1) 00:28:47.827 5991.863 - 6023.070: 99.9942% ( 2) 00:28:47.827 6023.070 - 6054.278: 99.9961% ( 1) 00:28:47.827 6054.278 - 6085.486: 99.9981% ( 1) 00:28:47.827 6116.693 - 6147.901: 100.0000% ( 1) 00:28:47.827 00:28:47.827 01:11:21 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:28:49.207 Initializing NVMe Controllers 00:28:49.207 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:49.207 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:49.207 Initialization complete. Launching workers. 00:28:49.207 ======================================================== 00:28:49.207 Latency(us) 00:28:49.207 Device Information : IOPS MiB/s Average min max 00:28:49.207 PCIE (0000:00:06.0) NSID 1 from core 0: 54123.27 634.26 2366.21 1193.58 11575.23 00:28:49.207 ======================================================== 00:28:49.207 Total : 54123.27 634.26 2366.21 1193.58 11575.23 00:28:49.207 00:28:49.207 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:49.207 ================================================================================= 00:28:49.207 1.00000% : 1685.211us 00:28:49.207 10.00000% : 1927.070us 00:28:49.207 25.00000% : 2090.910us 00:28:49.207 50.00000% : 2324.968us 00:28:49.207 75.00000% : 2574.629us 00:28:49.207 90.00000% : 2917.912us 00:28:49.207 95.00000% : 3136.366us 00:28:49.207 98.00000% : 3323.611us 00:28:49.207 99.00000% : 3432.838us 00:28:49.207 99.50000% : 3526.461us 00:28:49.207 99.90000% : 4056.990us 00:28:49.207 99.99000% : 9549.531us 00:28:49.207 99.99900% : 11609.234us 00:28:49.207 99.99990% : 11609.234us 00:28:49.207 99.99999% : 11609.234us 00:28:49.207 00:28:49.207 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:49.207 ============================================================================== 00:28:49.207 Range in us Cumulative IO count 00:28:49.207 1185.890 - 1193.691: 0.0018% ( 1) 00:28:49.207 1240.503 - 1248.305: 0.0037% ( 1) 00:28:49.207 1256.107 - 1263.909: 0.0074% ( 2) 00:28:49.207 1263.909 - 1271.710: 0.0092% ( 1) 00:28:49.207 1279.512 - 1287.314: 0.0148% ( 3) 00:28:49.207 1287.314 - 1295.116: 0.0166% ( 1) 00:28:49.207 1295.116 - 1302.918: 0.0203% ( 2) 00:28:49.207 1310.720 - 1318.522: 0.0240% ( 2) 00:28:49.207 1326.324 - 1334.126: 0.0277% ( 2) 00:28:49.207 1334.126 - 1341.928: 0.0295% ( 1) 00:28:49.207 1349.730 - 1357.531: 0.0351% ( 3) 00:28:49.207 1357.531 - 1365.333: 0.0369% ( 1) 00:28:49.207 1365.333 - 1373.135: 0.0443% ( 4) 00:28:49.207 1373.135 - 1380.937: 0.0480% ( 2) 00:28:49.207 1380.937 - 1388.739: 0.0628% ( 8) 00:28:49.207 1388.739 - 1396.541: 0.0683% ( 3) 00:28:49.207 1396.541 - 1404.343: 0.0757% ( 4) 00:28:49.207 1404.343 - 1412.145: 0.0812% ( 3) 00:28:49.207 1412.145 - 1419.947: 0.0923% ( 6) 00:28:49.207 1419.947 - 1427.749: 0.0960% ( 2) 
00:28:49.207 1427.749 - 1435.550: 0.0978% ( 1) 00:28:49.207 1435.550 - 1443.352: 0.1071% ( 5) 00:28:49.207 1443.352 - 1451.154: 0.1108% ( 2) 00:28:49.207 1451.154 - 1458.956: 0.1145% ( 2) 00:28:49.207 1458.956 - 1466.758: 0.1255% ( 6) 00:28:49.207 1466.758 - 1474.560: 0.1385% ( 7) 00:28:49.207 1474.560 - 1482.362: 0.1459% ( 4) 00:28:49.207 1482.362 - 1490.164: 0.1514% ( 3) 00:28:49.207 1490.164 - 1497.966: 0.1532% ( 1) 00:28:49.207 1497.966 - 1505.768: 0.1680% ( 8) 00:28:49.207 1505.768 - 1513.570: 0.1865% ( 10) 00:28:49.207 1513.570 - 1521.371: 0.1957% ( 5) 00:28:49.207 1521.371 - 1529.173: 0.2068% ( 6) 00:28:49.207 1529.173 - 1536.975: 0.2160% ( 5) 00:28:49.207 1536.975 - 1544.777: 0.2289% ( 7) 00:28:49.207 1544.777 - 1552.579: 0.2474% ( 10) 00:28:49.207 1552.579 - 1560.381: 0.2622% ( 8) 00:28:49.207 1560.381 - 1568.183: 0.2843% ( 12) 00:28:49.207 1568.183 - 1575.985: 0.2935% ( 5) 00:28:49.207 1575.985 - 1583.787: 0.3175% ( 13) 00:28:49.207 1583.787 - 1591.589: 0.3489% ( 17) 00:28:49.207 1591.589 - 1599.390: 0.3896% ( 22) 00:28:49.207 1599.390 - 1607.192: 0.4172% ( 15) 00:28:49.207 1607.192 - 1614.994: 0.4634% ( 25) 00:28:49.207 1614.994 - 1622.796: 0.5003% ( 20) 00:28:49.207 1622.796 - 1630.598: 0.5446% ( 24) 00:28:49.207 1630.598 - 1638.400: 0.5834% ( 21) 00:28:49.207 1638.400 - 1646.202: 0.6499% ( 36) 00:28:49.207 1646.202 - 1654.004: 0.7219% ( 39) 00:28:49.207 1654.004 - 1661.806: 0.7883% ( 36) 00:28:49.207 1661.806 - 1669.608: 0.8991% ( 60) 00:28:49.207 1669.608 - 1677.410: 0.9914% ( 50) 00:28:49.208 1677.410 - 1685.211: 1.0985% ( 58) 00:28:49.208 1685.211 - 1693.013: 1.2019% ( 56) 00:28:49.208 1693.013 - 1700.815: 1.3034% ( 55) 00:28:49.208 1700.815 - 1708.617: 1.4271% ( 67) 00:28:49.208 1708.617 - 1716.419: 1.5711% ( 78) 00:28:49.208 1716.419 - 1724.221: 1.7465% ( 95) 00:28:49.208 1724.221 - 1732.023: 1.9441% ( 107) 00:28:49.208 1732.023 - 1739.825: 2.1619% ( 118) 00:28:49.208 1739.825 - 1747.627: 2.3355% ( 94) 00:28:49.208 1747.627 - 1755.429: 2.5312% ( 106) 00:28:49.208 1755.429 - 1763.230: 2.7545% ( 121) 00:28:49.208 1763.230 - 1771.032: 3.1219% ( 199) 00:28:49.208 1771.032 - 1778.834: 3.3416% ( 119) 00:28:49.208 1778.834 - 1786.636: 3.5743% ( 126) 00:28:49.208 1786.636 - 1794.438: 3.7866% ( 115) 00:28:49.208 1794.438 - 1802.240: 4.0543% ( 145) 00:28:49.208 1802.240 - 1810.042: 4.3995% ( 187) 00:28:49.208 1810.042 - 1817.844: 4.6395% ( 130) 00:28:49.208 1817.844 - 1825.646: 4.9054% ( 144) 00:28:49.208 1825.646 - 1833.448: 5.1712% ( 144) 00:28:49.208 1833.448 - 1841.250: 5.4703% ( 162) 00:28:49.208 1841.250 - 1849.051: 5.7657% ( 160) 00:28:49.208 1849.051 - 1856.853: 6.0630% ( 161) 00:28:49.208 1856.853 - 1864.655: 6.3787% ( 171) 00:28:49.208 1864.655 - 1872.457: 6.7442% ( 198) 00:28:49.208 1872.457 - 1880.259: 7.2482% ( 273) 00:28:49.208 1880.259 - 1888.061: 7.6876% ( 238) 00:28:49.208 1888.061 - 1895.863: 8.1086% ( 228) 00:28:49.208 1895.863 - 1903.665: 8.6920% ( 316) 00:28:49.208 1903.665 - 1911.467: 9.1646% ( 256) 00:28:49.208 1911.467 - 1919.269: 9.7240% ( 303) 00:28:49.208 1919.269 - 1927.070: 10.2631% ( 292) 00:28:49.208 1927.070 - 1934.872: 10.8003% ( 291) 00:28:49.208 1934.872 - 1942.674: 11.4428% ( 348) 00:28:49.208 1942.674 - 1950.476: 12.0428% ( 325) 00:28:49.208 1950.476 - 1958.278: 12.6133% ( 309) 00:28:49.208 1958.278 - 1966.080: 13.3629% ( 406) 00:28:49.208 1966.080 - 1973.882: 14.0072% ( 349) 00:28:49.208 1973.882 - 1981.684: 14.7808% ( 419) 00:28:49.208 1981.684 - 1989.486: 15.5156% ( 398) 00:28:49.208 1989.486 - 1997.288: 16.1950% ( 368) 00:28:49.208 1997.288 
- 2012.891: 17.5132% ( 714) 00:28:49.208 2012.891 - 2028.495: 18.8646% ( 732) 00:28:49.208 2028.495 - 2044.099: 20.2677% ( 760) 00:28:49.208 2044.099 - 2059.703: 21.8407% ( 852) 00:28:49.208 2059.703 - 2075.307: 23.6149% ( 961) 00:28:49.208 2075.307 - 2090.910: 25.4500% ( 994) 00:28:49.208 2090.910 - 2106.514: 27.2519% ( 976) 00:28:49.208 2106.514 - 2122.118: 28.8397% ( 860) 00:28:49.208 2122.118 - 2137.722: 30.6859% ( 1000) 00:28:49.208 2137.722 - 2153.326: 32.4047% ( 931) 00:28:49.208 2153.326 - 2168.930: 33.9887% ( 858) 00:28:49.208 2168.930 - 2184.533: 35.5580% ( 850) 00:28:49.208 2184.533 - 2200.137: 37.2048% ( 892) 00:28:49.208 2200.137 - 2215.741: 38.8664% ( 900) 00:28:49.208 2215.741 - 2231.345: 40.3563% ( 807) 00:28:49.208 2231.345 - 2246.949: 41.9754% ( 877) 00:28:49.208 2246.949 - 2262.552: 43.6075% ( 884) 00:28:49.208 2262.552 - 2278.156: 45.7214% ( 1145) 00:28:49.208 2278.156 - 2293.760: 47.4661% ( 945) 00:28:49.208 2293.760 - 2309.364: 49.1720% ( 924) 00:28:49.208 2309.364 - 2324.968: 50.9093% ( 941) 00:28:49.208 2324.968 - 2340.571: 52.6816% ( 960) 00:28:49.208 2340.571 - 2356.175: 54.4208% ( 942) 00:28:49.208 2356.175 - 2371.779: 56.1691% ( 947) 00:28:49.208 2371.779 - 2387.383: 57.8252% ( 897) 00:28:49.208 2387.383 - 2402.987: 59.3871% ( 846) 00:28:49.208 2402.987 - 2418.590: 61.0597% ( 906) 00:28:49.208 2418.590 - 2434.194: 62.7582% ( 920) 00:28:49.208 2434.194 - 2449.798: 64.5638% ( 978) 00:28:49.208 2449.798 - 2465.402: 66.4968% ( 1047) 00:28:49.208 2465.402 - 2481.006: 68.3135% ( 984) 00:28:49.208 2481.006 - 2496.610: 69.8754% ( 846) 00:28:49.208 2496.610 - 2512.213: 71.3376% ( 792) 00:28:49.208 2512.213 - 2527.817: 72.6410% ( 706) 00:28:49.208 2527.817 - 2543.421: 73.7986% ( 627) 00:28:49.208 2543.421 - 2559.025: 74.9451% ( 621) 00:28:49.208 2559.025 - 2574.629: 75.9070% ( 521) 00:28:49.208 2574.629 - 2590.232: 76.9408% ( 560) 00:28:49.208 2590.232 - 2605.836: 77.8879% ( 513) 00:28:49.208 2605.836 - 2621.440: 78.9015% ( 549) 00:28:49.208 2621.440 - 2637.044: 79.7341% ( 451) 00:28:49.208 2637.044 - 2652.648: 80.5631% ( 449) 00:28:49.208 2652.648 - 2668.251: 81.2776% ( 387) 00:28:49.208 2668.251 - 2683.855: 81.9588% ( 369) 00:28:49.208 2683.855 - 2699.459: 82.5884% ( 341) 00:28:49.208 2699.459 - 2715.063: 83.2623% ( 365) 00:28:49.208 2715.063 - 2730.667: 83.8493% ( 318) 00:28:49.208 2730.667 - 2746.270: 84.4900% ( 347) 00:28:49.208 2746.270 - 2761.874: 85.0512% ( 304) 00:28:49.208 2761.874 - 2777.478: 85.6125% ( 304) 00:28:49.208 2777.478 - 2793.082: 86.1442% ( 288) 00:28:49.208 2793.082 - 2808.686: 86.6667% ( 283) 00:28:49.208 2808.686 - 2824.290: 87.2113% ( 295) 00:28:49.208 2824.290 - 2839.893: 87.7190% ( 275) 00:28:49.208 2839.893 - 2855.497: 88.2230% ( 273) 00:28:49.208 2855.497 - 2871.101: 88.6790% ( 247) 00:28:49.208 2871.101 - 2886.705: 89.1554% ( 258) 00:28:49.208 2886.705 - 2902.309: 89.6243% ( 254) 00:28:49.208 2902.309 - 2917.912: 90.0766% ( 245) 00:28:49.208 2917.912 - 2933.516: 90.5308% ( 246) 00:28:49.208 2933.516 - 2949.120: 90.9610% ( 233) 00:28:49.208 2949.120 - 2964.724: 91.4022% ( 239) 00:28:49.208 2964.724 - 2980.328: 91.8157% ( 224) 00:28:49.208 2980.328 - 2995.931: 92.1942% ( 205) 00:28:49.208 2995.931 - 3011.535: 92.5912% ( 215) 00:28:49.208 3011.535 - 3027.139: 92.9530% ( 196) 00:28:49.208 3027.139 - 3042.743: 93.3278% ( 203) 00:28:49.208 3042.743 - 3058.347: 93.6767% ( 189) 00:28:49.208 3058.347 - 3073.950: 94.0275% ( 190) 00:28:49.208 3073.950 - 3089.554: 94.3432% ( 171) 00:28:49.208 3089.554 - 3105.158: 94.6497% ( 166) 00:28:49.208 3105.158 - 
3120.762: 94.9654% ( 171) 00:28:49.208 3120.762 - 3136.366: 95.2682% ( 164) 00:28:49.208 3136.366 - 3151.970: 95.5414% ( 148) 00:28:49.208 3151.970 - 3167.573: 95.8109% ( 146) 00:28:49.208 3167.573 - 3183.177: 96.1063% ( 160) 00:28:49.208 3183.177 - 3198.781: 96.3648% ( 140) 00:28:49.208 3198.781 - 3214.385: 96.5956% ( 125) 00:28:49.208 3214.385 - 3229.989: 96.8374% ( 131) 00:28:49.208 3229.989 - 3245.592: 97.0516% ( 116) 00:28:49.208 3245.592 - 3261.196: 97.3045% ( 137) 00:28:49.208 3261.196 - 3276.800: 97.4781% ( 94) 00:28:49.208 3276.800 - 3292.404: 97.6572% ( 97) 00:28:49.208 3292.404 - 3308.008: 97.8584% ( 109) 00:28:49.208 3308.008 - 3323.611: 98.0504% ( 104) 00:28:49.208 3323.611 - 3339.215: 98.2147% ( 89) 00:28:49.208 3339.215 - 3354.819: 98.3790% ( 89) 00:28:49.208 3354.819 - 3370.423: 98.5433% ( 89) 00:28:49.208 3370.423 - 3386.027: 98.6670% ( 67) 00:28:49.208 3386.027 - 3401.630: 98.7889% ( 66) 00:28:49.208 3401.630 - 3417.234: 98.9144% ( 68) 00:28:49.208 3417.234 - 3432.838: 99.0197% ( 57) 00:28:49.208 3432.838 - 3448.442: 99.1157% ( 52) 00:28:49.208 3448.442 - 3464.046: 99.2135% ( 53) 00:28:49.208 3464.046 - 3479.650: 99.3003% ( 47) 00:28:49.208 3479.650 - 3495.253: 99.3815% ( 44) 00:28:49.208 3495.253 - 3510.857: 99.4480% ( 36) 00:28:49.208 3510.857 - 3526.461: 99.5144% ( 36) 00:28:49.208 3526.461 - 3542.065: 99.5624% ( 26) 00:28:49.208 3542.065 - 3557.669: 99.6049% ( 23) 00:28:49.208 3557.669 - 3573.272: 99.6511% ( 25) 00:28:49.208 3573.272 - 3588.876: 99.6898% ( 21) 00:28:49.208 3588.876 - 3604.480: 99.7138% ( 13) 00:28:49.208 3604.480 - 3620.084: 99.7452% ( 17) 00:28:49.208 3620.084 - 3635.688: 99.7692% ( 13) 00:28:49.208 3635.688 - 3651.291: 99.7858% ( 9) 00:28:49.208 3651.291 - 3666.895: 99.8061% ( 11) 00:28:49.208 3666.895 - 3682.499: 99.8135% ( 4) 00:28:49.208 3682.499 - 3698.103: 99.8265% ( 7) 00:28:49.208 3698.103 - 3713.707: 99.8375% ( 6) 00:28:49.208 3713.707 - 3729.310: 99.8449% ( 4) 00:28:49.208 3729.310 - 3744.914: 99.8505% ( 3) 00:28:49.208 3744.914 - 3760.518: 99.8578% ( 4) 00:28:49.209 3760.518 - 3776.122: 99.8615% ( 2) 00:28:49.209 3776.122 - 3791.726: 99.8689% ( 4) 00:28:49.209 3791.726 - 3807.330: 99.8708% ( 1) 00:28:49.209 3807.330 - 3822.933: 99.8745% ( 2) 00:28:49.209 3822.933 - 3838.537: 99.8782% ( 2) 00:28:49.209 3838.537 - 3854.141: 99.8800% ( 1) 00:28:49.209 3854.141 - 3869.745: 99.8818% ( 1) 00:28:49.209 3869.745 - 3885.349: 99.8855% ( 2) 00:28:49.209 3885.349 - 3900.952: 99.8874% ( 1) 00:28:49.209 3916.556 - 3932.160: 99.8892% ( 1) 00:28:49.209 3932.160 - 3947.764: 99.8911% ( 1) 00:28:49.209 3947.764 - 3963.368: 99.8929% ( 1) 00:28:49.209 3978.971 - 3994.575: 99.8948% ( 1) 00:28:49.209 3994.575 - 4025.783: 99.8966% ( 1) 00:28:49.209 4025.783 - 4056.990: 99.9003% ( 2) 00:28:49.209 4056.990 - 4088.198: 99.9022% ( 1) 00:28:49.209 4088.198 - 4119.406: 99.9058% ( 2) 00:28:49.209 4119.406 - 4150.613: 99.9077% ( 1) 00:28:49.209 4150.613 - 4181.821: 99.9114% ( 2) 00:28:49.209 4181.821 - 4213.029: 99.9132% ( 1) 00:28:49.209 4400.274 - 4431.482: 99.9169% ( 2) 00:28:49.209 4493.897 - 4525.105: 99.9188% ( 1) 00:28:49.209 4556.312 - 4587.520: 99.9206% ( 1) 00:28:49.209 4587.520 - 4618.728: 99.9262% ( 3) 00:28:49.209 4618.728 - 4649.935: 99.9298% ( 2) 00:28:49.209 4649.935 - 4681.143: 99.9317% ( 1) 00:28:49.209 5617.371 - 5648.579: 99.9335% ( 1) 00:28:49.209 5648.579 - 5679.787: 99.9354% ( 1) 00:28:49.209 5679.787 - 5710.994: 99.9391% ( 2) 00:28:49.209 5710.994 - 5742.202: 99.9409% ( 1) 00:28:49.209 5742.202 - 5773.410: 99.9465% ( 3) 00:28:49.209 5773.410 - 
5804.617: 99.9483% ( 1) 00:28:49.209 5804.617 - 5835.825: 99.9502% ( 1) 00:28:49.209 5835.825 - 5867.032: 99.9538% ( 2) 00:28:49.209 5867.032 - 5898.240: 99.9557% ( 1) 00:28:49.209 5898.240 - 5929.448: 99.9575% ( 1) 00:28:49.209 5929.448 - 5960.655: 99.9594% ( 1) 00:28:49.209 5960.655 - 5991.863: 99.9612% ( 1) 00:28:49.209 5991.863 - 6023.070: 99.9649% ( 2) 00:28:49.209 6023.070 - 6054.278: 99.9668% ( 1) 00:28:49.209 6553.600 - 6584.808: 99.9705% ( 2) 00:28:49.209 6584.808 - 6616.015: 99.9723% ( 1) 00:28:49.209 6647.223 - 6678.430: 99.9742% ( 1) 00:28:49.209 6740.846 - 6772.053: 99.9760% ( 1) 00:28:49.209 6772.053 - 6803.261: 99.9778% ( 1) 00:28:49.209 6803.261 - 6834.469: 99.9797% ( 1) 00:28:49.209 6928.091 - 6959.299: 99.9815% ( 1) 00:28:49.209 8238.811 - 8301.227: 99.9834% ( 1) 00:28:49.209 8800.549 - 8862.964: 99.9852% ( 1) 00:28:49.209 9424.701 - 9487.116: 99.9871% ( 1) 00:28:49.209 9487.116 - 9549.531: 99.9908% ( 2) 00:28:49.209 9674.362 - 9736.777: 99.9926% ( 1) 00:28:49.209 11484.404 - 11546.819: 99.9982% ( 3) 00:28:49.209 11546.819 - 11609.234: 100.0000% ( 1) 00:28:49.209 00:28:49.209 01:11:23 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:28:49.209 00:28:49.209 real 0m2.642s 00:28:49.209 user 0m2.228s 00:28:49.209 sys 0m0.272s 00:28:49.209 01:11:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:49.209 ************************************ 00:28:49.209 END TEST nvme_perf 00:28:49.209 ************************************ 00:28:49.209 01:11:23 -- common/autotest_common.sh@10 -- # set +x 00:28:49.209 01:11:23 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:28:49.209 01:11:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:28:49.209 01:11:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:49.209 01:11:23 -- common/autotest_common.sh@10 -- # set +x 00:28:49.209 ************************************ 00:28:49.209 START TEST nvme_hello_world 00:28:49.209 ************************************ 00:28:49.209 01:11:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:28:49.469 Initializing NVMe Controllers 00:28:49.469 Attached to 0000:00:06.0 00:28:49.469 Namespace ID: 1 size: 5GB 00:28:49.469 Initialization complete. 00:28:49.469 INFO: using host memory buffer for IO 00:28:49.469 Hello world! 
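For reference, the identify and perf steps above can be repeated by hand against the same controller. The sketch below is illustrative only: it reuses the exact binaries and flags already recorded in this log, while the setup.sh step and the idea of running it interactively are assumptions, not something this pipeline shows.

  # Illustrative sketch, assuming the checkout at /home/vagrant/spdk_repo/spdk and
  # that the controller at 0000:00:06.0 has been handed to a userspace driver
  # (e.g. via scripts/setup.sh -- an assumption, not shown in this log).
  cd /home/vagrant/spdk_repo/spdk
  ./scripts/setup.sh
  # Identify the controller at the PCIe address used throughout this job
  ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0
  # The two perf runs above: 12288-byte reads, then writes, queue depth 128, 1 second each
  ./build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
  ./build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
  # Minimal single-I/O example, as run by nvme_hello_world above
  ./build/examples/hello_world -i 0

Reading the two latency summaries above: the read run's median is roughly 2465 us with a 99th percentile near 3963 us, and the write run's median is roughly 2325 us with a longer tail (maximum around 11.6 ms), both at queue depth 128 with 12288-byte I/O.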
00:28:49.469 00:28:49.469 real 0m0.284s 00:28:49.469 user 0m0.099s 00:28:49.469 sys 0m0.126s 00:28:49.469 01:11:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:49.469 01:11:23 -- common/autotest_common.sh@10 -- # set +x 00:28:49.469 ************************************ 00:28:49.469 END TEST nvme_hello_world 00:28:49.469 ************************************ 00:28:49.469 01:11:23 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:28:49.469 01:11:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:49.469 01:11:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:49.469 01:11:23 -- common/autotest_common.sh@10 -- # set +x 00:28:49.469 ************************************ 00:28:49.469 START TEST nvme_sgl 00:28:49.469 ************************************ 00:28:49.469 01:11:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:28:49.729 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:28:49.729 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:28:49.730 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:28:49.730 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:28:49.730 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:28:49.730 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:28:49.730 NVMe Readv/Writev Request test 00:28:49.730 Attached to 0000:00:06.0 00:28:49.730 0000:00:06.0: build_io_request_2 test passed 00:28:49.730 0000:00:06.0: build_io_request_4 test passed 00:28:49.730 0000:00:06.0: build_io_request_5 test passed 00:28:49.730 0000:00:06.0: build_io_request_6 test passed 00:28:49.730 0000:00:06.0: build_io_request_7 test passed 00:28:49.730 0000:00:06.0: build_io_request_10 test passed 00:28:49.730 Cleaning up... 00:28:49.730 00:28:49.730 real 0m0.328s 00:28:49.730 user 0m0.135s 00:28:49.730 sys 0m0.126s 00:28:49.730 01:11:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:49.730 ************************************ 00:28:49.730 END TEST nvme_sgl 00:28:49.730 ************************************ 00:28:49.730 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:28:49.730 01:11:24 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:28:49.730 01:11:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:49.730 01:11:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:49.730 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:28:49.730 ************************************ 00:28:49.730 START TEST nvme_e2edp 00:28:49.730 ************************************ 00:28:49.730 01:11:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:28:49.990 NVMe Write/Read with End-to-End data protection test 00:28:49.990 Attached to 0000:00:06.0 00:28:49.990 Cleaning up... 
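Every test block in this section follows the same shape: nvme.sh calls run_test with a test name and a command, which prints the START banner, runs and times the command, emits the real/user/sys lines, and closes with the END banner. The real helper lives in common/autotest_common.sh and is not reproduced in this log; a rough, illustrative approximation of the pattern (not the actual implementation) looks like:

  # Illustrative approximation of the run_test wrapper seen throughout this log;
  # the real implementation in common/autotest_common.sh differs.
  run_test() {
      local name=$1
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }

  # Usage matching the invocations above, e.g.:
  # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp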
00:28:49.990 00:28:49.990 real 0m0.292s 00:28:49.990 user 0m0.098s 00:28:49.990 sys 0m0.129s 00:28:49.990 01:11:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:49.990 ************************************ 00:28:49.990 END TEST nvme_e2edp 00:28:49.990 ************************************ 00:28:49.990 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:28:50.249 01:11:24 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:28:50.249 01:11:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:50.249 01:11:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:50.250 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:28:50.250 ************************************ 00:28:50.250 START TEST nvme_reserve 00:28:50.250 ************************************ 00:28:50.250 01:11:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:28:50.510 ===================================================== 00:28:50.510 NVMe Controller at PCI bus 0, device 6, function 0 00:28:50.510 ===================================================== 00:28:50.510 Reservations: Not Supported 00:28:50.510 Reservation test passed 00:28:50.510 00:28:50.510 real 0m0.295s 00:28:50.510 user 0m0.101s 00:28:50.510 sys 0m0.127s 00:28:50.510 01:11:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:50.510 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:28:50.510 ************************************ 00:28:50.510 END TEST nvme_reserve 00:28:50.510 ************************************ 00:28:50.510 01:11:24 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:28:50.510 01:11:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:50.510 01:11:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:50.510 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:28:50.510 ************************************ 00:28:50.510 START TEST nvme_err_injection 00:28:50.510 ************************************ 00:28:50.510 01:11:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:28:50.770 NVMe Error Injection test 00:28:50.770 Attached to 0000:00:06.0 00:28:50.770 0000:00:06.0: get features failed as expected 00:28:50.770 0000:00:06.0: get features successfully as expected 00:28:50.770 0000:00:06.0: read failed as expected 00:28:50.770 0000:00:06.0: read successfully as expected 00:28:50.770 Cleaning up... 
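When skimming a long autotest log like this one, it can help to list just the test names with their wall-clock times. A small illustrative helper (an assumption: it is not part of the SPDK tree, and it presumes the log has been saved as build.log with one timestamped record per line) could be:

  # Illustrative only: print each START TEST name followed by the first
  # 'real NmN.NNNs' timing that appears after it.
  awk '
      /START TEST/ { name = $NF; next }
      name != "" && /real[ \t]+[0-9]+m[0-9.]+s/ { print name, $NF; name = "" }
  ' build.log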
00:28:50.770 00:28:50.770 real 0m0.316s 00:28:50.770 user 0m0.086s 00:28:50.770 sys 0m0.162s 00:28:50.770 01:11:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:50.770 ************************************ 00:28:50.770 END TEST nvme_err_injection 00:28:50.770 ************************************ 00:28:50.770 01:11:25 -- common/autotest_common.sh@10 -- # set +x 00:28:50.770 01:11:25 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:28:50.770 01:11:25 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:28:50.770 01:11:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:50.770 01:11:25 -- common/autotest_common.sh@10 -- # set +x 00:28:50.770 ************************************ 00:28:50.770 START TEST nvme_overhead 00:28:50.770 ************************************ 00:28:50.770 01:11:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:28:52.150 Initializing NVMe Controllers 00:28:52.150 Attached to 0000:00:06.0 00:28:52.150 Initialization complete. Launching workers. 00:28:52.150 submit (in ns) avg, min, max = 13011.0, 12242.9, 53858.1 00:28:52.150 complete (in ns) avg, min, max = 8735.8, 8240.0, 100836.2 00:28:52.150 00:28:52.150 Submit histogram 00:28:52.150 ================ 00:28:52.150 Range in us Cumulative Count 00:28:52.150 12.190 - 12.251: 0.0140% ( 1) 00:28:52.150 12.312 - 12.373: 0.1679% ( 11) 00:28:52.150 12.373 - 12.434: 0.3779% ( 15) 00:28:52.150 12.434 - 12.495: 0.7838% ( 29) 00:28:52.150 12.495 - 12.556: 2.9391% ( 154) 00:28:52.150 12.556 - 12.617: 10.3289% ( 528) 00:28:52.150 12.617 - 12.678: 23.0931% ( 912) 00:28:52.150 12.678 - 12.739: 37.9986% ( 1065) 00:28:52.150 12.739 - 12.800: 51.4626% ( 962) 00:28:52.150 12.800 - 12.861: 61.3296% ( 705) 00:28:52.150 12.861 - 12.922: 69.2372% ( 565) 00:28:52.150 12.922 - 12.983: 75.9692% ( 481) 00:28:52.150 12.983 - 13.044: 82.0014% ( 431) 00:28:52.150 13.044 - 13.105: 86.8719% ( 348) 00:28:52.150 13.105 - 13.166: 90.5668% ( 264) 00:28:52.150 13.166 - 13.227: 92.8901% ( 166) 00:28:52.150 13.227 - 13.288: 94.2057% ( 94) 00:28:52.150 13.288 - 13.349: 94.9475% ( 53) 00:28:52.150 13.349 - 13.410: 95.2414% ( 21) 00:28:52.150 13.410 - 13.470: 95.5633% ( 23) 00:28:52.150 13.470 - 13.531: 95.7593% ( 14) 00:28:52.150 13.531 - 13.592: 95.8572% ( 7) 00:28:52.150 13.592 - 13.653: 95.9272% ( 5) 00:28:52.150 13.653 - 13.714: 95.9692% ( 3) 00:28:52.150 13.714 - 13.775: 96.0252% ( 4) 00:28:52.150 13.775 - 13.836: 96.0812% ( 4) 00:28:52.150 13.897 - 13.958: 96.1372% ( 4) 00:28:52.150 14.019 - 14.080: 96.1652% ( 2) 00:28:52.150 14.141 - 14.202: 96.2071% ( 3) 00:28:52.150 14.202 - 14.263: 96.2211% ( 1) 00:28:52.150 14.324 - 14.385: 96.2911% ( 5) 00:28:52.150 14.507 - 14.568: 96.3051% ( 1) 00:28:52.150 14.568 - 14.629: 96.3471% ( 3) 00:28:52.150 14.629 - 14.690: 96.3611% ( 1) 00:28:52.150 14.690 - 14.750: 96.4031% ( 3) 00:28:52.150 14.750 - 14.811: 96.4311% ( 2) 00:28:52.150 14.811 - 14.872: 96.4731% ( 3) 00:28:52.150 14.872 - 14.933: 96.5010% ( 2) 00:28:52.150 14.933 - 14.994: 96.5570% ( 4) 00:28:52.150 14.994 - 15.055: 96.6690% ( 8) 00:28:52.150 15.055 - 15.116: 96.7250% ( 4) 00:28:52.150 15.116 - 15.177: 96.8230% ( 7) 00:28:52.150 15.177 - 15.238: 96.9349% ( 8) 00:28:52.150 15.238 - 15.299: 97.0329% ( 7) 00:28:52.150 15.299 - 15.360: 97.1449% ( 8) 00:28:52.150 15.360 - 15.421: 97.2008% ( 4) 00:28:52.150 15.421 - 15.482: 97.2988% ( 7) 00:28:52.150 15.482 - 15.543: 97.3828% ( 6) 
00:28:52.150 15.543 - 15.604: 97.5087% ( 9) 00:28:52.150 15.604 - 15.726: 97.6067% ( 7) 00:28:52.150 15.726 - 15.848: 97.7747% ( 12) 00:28:52.150 15.848 - 15.970: 97.9146% ( 10) 00:28:52.150 15.970 - 16.091: 98.0126% ( 7) 00:28:52.150 16.091 - 16.213: 98.1945% ( 13) 00:28:52.150 16.213 - 16.335: 98.3485% ( 11) 00:28:52.150 16.335 - 16.457: 98.4605% ( 8) 00:28:52.150 16.457 - 16.579: 98.5164% ( 4) 00:28:52.150 16.579 - 16.701: 98.5584% ( 3) 00:28:52.150 16.701 - 16.823: 98.6004% ( 3) 00:28:52.150 16.823 - 16.945: 98.6144% ( 1) 00:28:52.150 16.945 - 17.067: 98.6704% ( 4) 00:28:52.150 17.067 - 17.189: 98.6844% ( 1) 00:28:52.150 17.189 - 17.310: 98.7544% ( 5) 00:28:52.150 17.310 - 17.432: 98.7824% ( 2) 00:28:52.150 17.432 - 17.554: 98.8244% ( 3) 00:28:52.150 17.554 - 17.676: 98.8383% ( 1) 00:28:52.150 17.676 - 17.798: 98.8803% ( 3) 00:28:52.150 17.798 - 17.920: 98.9363% ( 4) 00:28:52.150 17.920 - 18.042: 99.0483% ( 8) 00:28:52.150 18.042 - 18.164: 99.0763% ( 2) 00:28:52.150 18.164 - 18.286: 99.1043% ( 2) 00:28:52.150 18.286 - 18.408: 99.1603% ( 4) 00:28:52.150 18.408 - 18.530: 99.1742% ( 1) 00:28:52.150 18.530 - 18.651: 99.1882% ( 1) 00:28:52.150 18.651 - 18.773: 99.2582% ( 5) 00:28:52.150 18.773 - 18.895: 99.2862% ( 2) 00:28:52.150 18.895 - 19.017: 99.3282% ( 3) 00:28:52.150 19.017 - 19.139: 99.3842% ( 4) 00:28:52.150 19.139 - 19.261: 99.3982% ( 1) 00:28:52.150 19.383 - 19.505: 99.4262% ( 2) 00:28:52.150 19.505 - 19.627: 99.4402% ( 1) 00:28:52.150 19.627 - 19.749: 99.4542% ( 1) 00:28:52.150 19.749 - 19.870: 99.4682% ( 1) 00:28:52.150 20.602 - 20.724: 99.4822% ( 1) 00:28:52.150 20.968 - 21.090: 99.4962% ( 1) 00:28:52.150 21.090 - 21.211: 99.5101% ( 1) 00:28:52.150 21.333 - 21.455: 99.5241% ( 1) 00:28:52.150 22.187 - 22.309: 99.5521% ( 2) 00:28:52.150 22.674 - 22.796: 99.5661% ( 1) 00:28:52.150 23.040 - 23.162: 99.5801% ( 1) 00:28:52.150 23.162 - 23.284: 99.6361% ( 4) 00:28:52.150 23.284 - 23.406: 99.7061% ( 5) 00:28:52.150 23.528 - 23.650: 99.7341% ( 2) 00:28:52.150 23.771 - 23.893: 99.7481% ( 1) 00:28:52.150 24.747 - 24.869: 99.7621% ( 1) 00:28:52.150 25.356 - 25.478: 99.7901% ( 2) 00:28:52.150 25.478 - 25.600: 99.8041% ( 1) 00:28:52.150 25.844 - 25.966: 99.8321% ( 2) 00:28:52.150 26.453 - 26.575: 99.8460% ( 1) 00:28:52.150 26.819 - 26.941: 99.8600% ( 1) 00:28:52.150 26.941 - 27.063: 99.8740% ( 1) 00:28:52.150 28.404 - 28.526: 99.8880% ( 1) 00:28:52.150 29.501 - 29.623: 99.9020% ( 1) 00:28:52.150 29.623 - 29.745: 99.9160% ( 1) 00:28:52.150 30.232 - 30.354: 99.9300% ( 1) 00:28:52.150 30.720 - 30.842: 99.9440% ( 1) 00:28:52.150 33.646 - 33.890: 99.9580% ( 1) 00:28:52.150 35.109 - 35.352: 99.9720% ( 1) 00:28:52.150 43.642 - 43.886: 99.9860% ( 1) 00:28:52.150 53.638 - 53.882: 100.0000% ( 1) 00:28:52.150 00:28:52.150 Complete histogram 00:28:52.150 ================== 00:28:52.150 Range in us Cumulative Count 00:28:52.150 8.229 - 8.290: 0.0140% ( 1) 00:28:52.150 8.290 - 8.350: 0.3919% ( 27) 00:28:52.150 8.350 - 8.411: 5.7943% ( 386) 00:28:52.150 8.411 - 8.472: 23.2330% ( 1246) 00:28:52.150 8.472 - 8.533: 44.6466% ( 1530) 00:28:52.150 8.533 - 8.594: 63.0511% ( 1315) 00:28:52.150 8.594 - 8.655: 75.5353% ( 892) 00:28:52.150 8.655 - 8.716: 83.2750% ( 553) 00:28:52.151 8.716 - 8.777: 88.2575% ( 356) 00:28:52.151 8.777 - 8.838: 91.2806% ( 216) 00:28:52.151 8.838 - 8.899: 93.4920% ( 158) 00:28:52.151 8.899 - 8.960: 94.8076% ( 94) 00:28:52.151 8.960 - 9.021: 95.4234% ( 44) 00:28:52.151 9.021 - 9.082: 95.6753% ( 18) 00:28:52.151 9.082 - 9.143: 95.8572% ( 13) 00:28:52.151 9.143 - 9.204: 95.9552% ( 7) 
00:28:52.151 9.204 - 9.265: 96.0532% ( 7) 00:28:52.151 9.265 - 9.326: 96.1931% ( 10) 00:28:52.151 9.326 - 9.387: 96.3611% ( 12) 00:28:52.151 9.387 - 9.448: 96.4031% ( 3) 00:28:52.151 9.448 - 9.509: 96.4451% ( 3) 00:28:52.151 9.509 - 9.570: 96.5010% ( 4) 00:28:52.151 9.570 - 9.630: 96.5570% ( 4) 00:28:52.151 9.630 - 9.691: 96.5850% ( 2) 00:28:52.151 9.752 - 9.813: 96.6410% ( 4) 00:28:52.151 9.813 - 9.874: 96.6550% ( 1) 00:28:52.151 9.874 - 9.935: 96.7110% ( 4) 00:28:52.151 9.935 - 9.996: 96.8090% ( 7) 00:28:52.151 9.996 - 10.057: 96.8649% ( 4) 00:28:52.151 10.057 - 10.118: 96.8929% ( 2) 00:28:52.151 10.118 - 10.179: 96.9069% ( 1) 00:28:52.151 10.179 - 10.240: 96.9209% ( 1) 00:28:52.151 10.240 - 10.301: 96.9349% ( 1) 00:28:52.151 10.301 - 10.362: 96.9489% ( 1) 00:28:52.151 10.362 - 10.423: 96.9629% ( 1) 00:28:52.151 10.423 - 10.484: 96.9909% ( 2) 00:28:52.151 10.545 - 10.606: 97.0749% ( 6) 00:28:52.151 10.606 - 10.667: 97.1309% ( 4) 00:28:52.151 10.667 - 10.728: 97.2428% ( 8) 00:28:52.151 10.728 - 10.789: 97.3408% ( 7) 00:28:52.151 10.789 - 10.850: 97.4108% ( 5) 00:28:52.151 10.850 - 10.910: 97.5647% ( 11) 00:28:52.151 10.910 - 10.971: 97.6207% ( 4) 00:28:52.151 10.971 - 11.032: 97.7467% ( 9) 00:28:52.151 11.032 - 11.093: 97.8167% ( 5) 00:28:52.151 11.093 - 11.154: 97.8726% ( 4) 00:28:52.151 11.154 - 11.215: 97.9006% ( 2) 00:28:52.151 11.215 - 11.276: 97.9146% ( 1) 00:28:52.151 11.276 - 11.337: 97.9706% ( 4) 00:28:52.151 11.337 - 11.398: 97.9846% ( 1) 00:28:52.151 11.398 - 11.459: 98.0546% ( 5) 00:28:52.151 11.459 - 11.520: 98.1386% ( 6) 00:28:52.151 11.520 - 11.581: 98.1666% ( 2) 00:28:52.151 11.581 - 11.642: 98.2365% ( 5) 00:28:52.151 11.642 - 11.703: 98.2925% ( 4) 00:28:52.151 11.703 - 11.764: 98.3065% ( 1) 00:28:52.151 11.764 - 11.825: 98.3765% ( 5) 00:28:52.151 11.825 - 11.886: 98.4185% ( 3) 00:28:52.151 11.886 - 11.947: 98.4745% ( 4) 00:28:52.151 11.947 - 12.008: 98.4885% ( 1) 00:28:52.151 12.008 - 12.069: 98.5444% ( 4) 00:28:52.151 12.069 - 12.130: 98.5864% ( 3) 00:28:52.151 12.130 - 12.190: 98.6284% ( 3) 00:28:52.151 12.190 - 12.251: 98.6564% ( 2) 00:28:52.151 12.251 - 12.312: 98.6984% ( 3) 00:28:52.151 12.312 - 12.373: 98.7404% ( 3) 00:28:52.151 12.373 - 12.434: 98.7964% ( 4) 00:28:52.151 12.434 - 12.495: 98.8383% ( 3) 00:28:52.151 12.495 - 12.556: 98.8663% ( 2) 00:28:52.151 12.556 - 12.617: 98.8943% ( 2) 00:28:52.151 12.617 - 12.678: 98.9083% ( 1) 00:28:52.151 12.678 - 12.739: 98.9363% ( 2) 00:28:52.151 12.739 - 12.800: 98.9643% ( 2) 00:28:52.151 12.800 - 12.861: 98.9923% ( 2) 00:28:52.151 12.922 - 12.983: 99.0203% ( 2) 00:28:52.151 13.044 - 13.105: 99.0483% ( 2) 00:28:52.151 13.105 - 13.166: 99.0623% ( 1) 00:28:52.151 13.288 - 13.349: 99.0903% ( 2) 00:28:52.151 13.470 - 13.531: 99.1463% ( 4) 00:28:52.151 13.531 - 13.592: 99.1742% ( 2) 00:28:52.151 13.653 - 13.714: 99.2022% ( 2) 00:28:52.151 13.714 - 13.775: 99.2162% ( 1) 00:28:52.151 13.775 - 13.836: 99.2442% ( 2) 00:28:52.151 13.836 - 13.897: 99.2582% ( 1) 00:28:52.151 13.897 - 13.958: 99.2722% ( 1) 00:28:52.151 13.958 - 14.019: 99.2862% ( 1) 00:28:52.151 14.019 - 14.080: 99.3002% ( 1) 00:28:52.151 14.080 - 14.141: 99.3142% ( 1) 00:28:52.151 14.141 - 14.202: 99.3282% ( 1) 00:28:52.151 14.202 - 14.263: 99.3562% ( 2) 00:28:52.151 14.263 - 14.324: 99.4122% ( 4) 00:28:52.151 14.324 - 14.385: 99.4262% ( 1) 00:28:52.151 14.385 - 14.446: 99.4402% ( 1) 00:28:52.151 14.446 - 14.507: 99.4682% ( 2) 00:28:52.151 14.507 - 14.568: 99.4822% ( 1) 00:28:52.151 14.568 - 14.629: 99.4962% ( 1) 00:28:52.151 14.690 - 14.750: 99.5101% ( 1) 00:28:52.151 
14.750 - 14.811: 99.5241% ( 1) 00:28:52.151 14.872 - 14.933: 99.5381% ( 1) 00:28:52.151 15.055 - 15.116: 99.5521% ( 1) 00:28:52.151 15.238 - 15.299: 99.5661% ( 1) 00:28:52.151 16.335 - 16.457: 99.5801% ( 1) 00:28:52.151 16.823 - 16.945: 99.5941% ( 1) 00:28:52.151 17.067 - 17.189: 99.6081% ( 1) 00:28:52.151 17.798 - 17.920: 99.6221% ( 1) 00:28:52.151 18.286 - 18.408: 99.6361% ( 1) 00:28:52.151 18.773 - 18.895: 99.6921% ( 4) 00:28:52.151 18.895 - 19.017: 99.7201% ( 2) 00:28:52.151 19.017 - 19.139: 99.7761% ( 4) 00:28:52.151 19.139 - 19.261: 99.8041% ( 2) 00:28:52.151 19.261 - 19.383: 99.8460% ( 3) 00:28:52.151 19.383 - 19.505: 99.8740% ( 2) 00:28:52.151 20.724 - 20.846: 99.8880% ( 1) 00:28:52.151 20.968 - 21.090: 99.9020% ( 1) 00:28:52.151 21.333 - 21.455: 99.9160% ( 1) 00:28:52.151 24.747 - 24.869: 99.9300% ( 1) 00:28:52.151 27.916 - 28.038: 99.9440% ( 1) 00:28:52.151 30.232 - 30.354: 99.9580% ( 1) 00:28:52.151 38.522 - 38.766: 99.9720% ( 1) 00:28:52.151 41.204 - 41.448: 99.9860% ( 1) 00:28:52.151 100.450 - 100.937: 100.0000% ( 1) 00:28:52.151 00:28:52.151 00:28:52.151 real 0m1.314s 00:28:52.151 user 0m1.135s 00:28:52.151 sys 0m0.101s 00:28:52.151 01:11:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:52.151 01:11:26 -- common/autotest_common.sh@10 -- # set +x 00:28:52.151 ************************************ 00:28:52.151 END TEST nvme_overhead 00:28:52.151 ************************************ 00:28:52.151 01:11:26 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:28:52.151 01:11:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:28:52.151 01:11:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:52.151 01:11:26 -- common/autotest_common.sh@10 -- # set +x 00:28:52.151 ************************************ 00:28:52.151 START TEST nvme_arbitration 00:28:52.151 ************************************ 00:28:52.151 01:11:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:28:56.337 Initializing NVMe Controllers 00:28:56.337 Attached to 0000:00:06.0 00:28:56.337 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:28:56.337 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:28:56.337 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:28:56.337 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:28:56.337 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:28:56.337 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:28:56.337 Initialization complete. Launching workers. 
00:28:56.337 Starting thread on core 1 with urgent priority queue 00:28:56.337 Starting thread on core 2 with urgent priority queue 00:28:56.337 Starting thread on core 0 with urgent priority queue 00:28:56.337 Starting thread on core 3 with urgent priority queue 00:28:56.337 QEMU NVMe Ctrl (12340 ) core 0: 6828.67 IO/s 14.64 secs/100000 ios 00:28:56.337 QEMU NVMe Ctrl (12340 ) core 1: 6745.00 IO/s 14.83 secs/100000 ios 00:28:56.337 QEMU NVMe Ctrl (12340 ) core 2: 3919.67 IO/s 25.51 secs/100000 ios 00:28:56.337 QEMU NVMe Ctrl (12340 ) core 3: 4120.67 IO/s 24.27 secs/100000 ios 00:28:56.337 ======================================================== 00:28:56.337 00:28:56.337 00:28:56.337 real 0m3.374s 00:28:56.337 user 0m9.187s 00:28:56.337 sys 0m0.132s 00:28:56.337 01:11:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:56.337 ************************************ 00:28:56.337 END TEST nvme_arbitration 00:28:56.337 ************************************ 00:28:56.337 01:11:29 -- common/autotest_common.sh@10 -- # set +x 00:28:56.337 01:11:29 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:28:56.337 01:11:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:28:56.337 01:11:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:56.337 01:11:29 -- common/autotest_common.sh@10 -- # set +x 00:28:56.337 ************************************ 00:28:56.337 START TEST nvme_single_aen 00:28:56.337 ************************************ 00:28:56.337 01:11:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:28:56.337 [2024-11-18 01:11:29.978142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:56.337 [2024-11-18 01:11:29.978415] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.337 [2024-11-18 01:11:30.159611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:56.337 Asynchronous Event Request test 00:28:56.337 Attached to 0000:00:06.0 00:28:56.337 Reset controller to setup AER completions for this process 00:28:56.337 Registering asynchronous event callbacks... 00:28:56.337 Getting orig temperature thresholds of all controllers 00:28:56.337 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:56.337 Setting all controllers temperature threshold low to trigger AER 00:28:56.337 Waiting for all controllers temperature threshold to be set lower 00:28:56.337 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:56.337 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:28:56.338 Waiting for all controllers to trigger AER and reset threshold 00:28:56.338 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:56.338 Cleaning up... 
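As a quick consistency check on the arbitration summary above, the "secs/100000 ios" column is simply 100000 divided by the reported IO/s. For example, for core 0 (a throwaway one-liner, not part of the test suite):

awk 'BEGIN { printf "%.2f s per 100000 ios\n", 100000 / 6828.67 }'   # prints 14.64, matching the core 0 line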
00:28:56.338 00:28:56.338 real 0m0.261s 00:28:56.338 user 0m0.074s 00:28:56.338 sys 0m0.108s 00:28:56.338 01:11:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:56.338 ************************************ 00:28:56.338 END TEST nvme_single_aen 00:28:56.338 ************************************ 00:28:56.338 01:11:30 -- common/autotest_common.sh@10 -- # set +x 00:28:56.338 01:11:30 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:28:56.338 01:11:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:56.338 01:11:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:56.338 01:11:30 -- common/autotest_common.sh@10 -- # set +x 00:28:56.338 ************************************ 00:28:56.338 START TEST nvme_doorbell_aers 00:28:56.338 ************************************ 00:28:56.338 01:11:30 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:28:56.338 01:11:30 -- nvme/nvme.sh@70 -- # bdfs=() 00:28:56.338 01:11:30 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:28:56.338 01:11:30 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:28:56.338 01:11:30 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:28:56.338 01:11:30 -- common/autotest_common.sh@1508 -- # bdfs=() 00:28:56.338 01:11:30 -- common/autotest_common.sh@1508 -- # local bdfs 00:28:56.338 01:11:30 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:56.338 01:11:30 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:56.338 01:11:30 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:28:56.338 01:11:30 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:28:56.338 01:11:30 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:28:56.338 01:11:30 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:28:56.338 01:11:30 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:28:56.338 [2024-11-18 01:11:30.582349] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148983) is not found. Dropping the request. 00:29:06.317 Executing: test_write_invalid_db 00:29:06.317 Waiting for AER completion... 00:29:06.317 Failure: test_write_invalid_db 00:29:06.317 00:29:06.317 Executing: test_invalid_db_write_overflow_sq 00:29:06.317 Waiting for AER completion... 00:29:06.317 Failure: test_invalid_db_write_overflow_sq 00:29:06.317 00:29:06.317 Executing: test_invalid_db_write_overflow_cq 00:29:06.317 Waiting for AER completion... 
00:29:06.317 Failure: test_invalid_db_write_overflow_cq 00:29:06.317 00:29:06.317 00:29:06.317 real 0m10.104s 00:29:06.317 user 0m7.548s 00:29:06.317 sys 0m2.477s 00:29:06.317 01:11:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:06.317 ************************************ 00:29:06.317 END TEST nvme_doorbell_aers 00:29:06.317 ************************************ 00:29:06.317 01:11:40 -- common/autotest_common.sh@10 -- # set +x 00:29:06.317 01:11:40 -- nvme/nvme.sh@97 -- # uname 00:29:06.317 01:11:40 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:29:06.317 01:11:40 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:29:06.317 01:11:40 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:29:06.317 01:11:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:06.317 01:11:40 -- common/autotest_common.sh@10 -- # set +x 00:29:06.317 ************************************ 00:29:06.317 START TEST nvme_multi_aen 00:29:06.317 ************************************ 00:29:06.317 01:11:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:29:06.317 [2024-11-18 01:11:40.468464] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:06.318 [2024-11-18 01:11:40.468639] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.318 [2024-11-18 01:11:40.688202] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:06.318 [2024-11-18 01:11:40.688261] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148983) is not found. Dropping the request. 00:29:06.318 [2024-11-18 01:11:40.688343] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148983) is not found. Dropping the request. 00:29:06.318 [2024-11-18 01:11:40.688383] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148983) is not found. Dropping the request. 00:29:06.318 [2024-11-18 01:11:40.695737] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:06.318 Child process pid: 149173 00:29:06.318 [2024-11-18 01:11:40.696087] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.886 [Child] Asynchronous Event Request test 00:29:06.886 [Child] Attached to 0000:00:06.0 00:29:06.886 [Child] Registering asynchronous event callbacks... 00:29:06.886 [Child] Getting orig temperature thresholds of all controllers 00:29:06.886 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:29:06.886 [Child] Waiting for all controllers to trigger AER and reset threshold 00:29:06.886 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:29:06.886 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:29:06.886 [Child] Cleaning up... 00:29:06.886 Asynchronous Event Request test 00:29:06.886 Attached to 0000:00:06.0 00:29:06.886 Reset controller to setup AER completions for this process 00:29:06.886 Registering asynchronous event callbacks... 
00:29:06.886 Getting orig temperature thresholds of all controllers 00:29:06.886 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:29:06.886 Setting all controllers temperature threshold low to trigger AER 00:29:06.886 Waiting for all controllers temperature threshold to be set lower 00:29:06.886 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:29:06.886 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:29:06.886 Waiting for all controllers to trigger AER and reset threshold 00:29:06.886 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:29:06.886 Cleaning up... 00:29:06.886 00:29:06.886 real 0m0.713s 00:29:06.886 user 0m0.259s 00:29:06.886 sys 0m0.284s 00:29:06.886 01:11:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:06.886 ************************************ 00:29:06.886 END TEST nvme_multi_aen 00:29:06.886 ************************************ 00:29:06.886 01:11:41 -- common/autotest_common.sh@10 -- # set +x 00:29:06.886 01:11:41 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:29:06.886 01:11:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:29:06.886 01:11:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:06.886 01:11:41 -- common/autotest_common.sh@10 -- # set +x 00:29:06.886 ************************************ 00:29:06.886 START TEST nvme_startup 00:29:06.886 ************************************ 00:29:06.886 01:11:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:29:07.144 Initializing NVMe Controllers 00:29:07.144 Attached to 0000:00:06.0 00:29:07.144 Initialization complete. 00:29:07.144 Time used:198539.938 (us). 00:29:07.144 00:29:07.144 real 0m0.287s 00:29:07.144 user 0m0.105s 00:29:07.144 sys 0m0.121s 00:29:07.144 01:11:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:07.144 ************************************ 00:29:07.144 END TEST nvme_startup 00:29:07.144 ************************************ 00:29:07.144 01:11:41 -- common/autotest_common.sh@10 -- # set +x 00:29:07.144 01:11:41 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:29:07.144 01:11:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:07.144 01:11:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:07.144 01:11:41 -- common/autotest_common.sh@10 -- # set +x 00:29:07.144 ************************************ 00:29:07.144 START TEST nvme_multi_secondary 00:29:07.144 ************************************ 00:29:07.144 01:11:41 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:29:07.403 01:11:41 -- nvme/nvme.sh@52 -- # pid0=149244 00:29:07.403 01:11:41 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:29:07.403 01:11:41 -- nvme/nvme.sh@54 -- # pid1=149247 00:29:07.403 01:11:41 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:29:07.403 01:11:41 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:29:11.590 Initializing NVMe Controllers 00:29:11.590 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:11.590 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:29:11.590 Initialization complete. Launching workers. 
00:29:11.590 ======================================================== 00:29:11.590 Latency(us) 00:29:11.590 Device Information : IOPS MiB/s Average min max 00:29:11.590 PCIE (0000:00:06.0) NSID 1 from core 2: 14669.76 57.30 1090.51 172.46 20897.94 00:29:11.590 ======================================================== 00:29:11.590 Total : 14669.76 57.30 1090.51 172.46 20897.94 00:29:11.590 00:29:11.590 01:11:45 -- nvme/nvme.sh@56 -- # wait 149244 00:29:11.590 Initializing NVMe Controllers 00:29:11.590 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:11.590 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:29:11.590 Initialization complete. Launching workers. 00:29:11.590 ======================================================== 00:29:11.590 Latency(us) 00:29:11.590 Device Information : IOPS MiB/s Average min max 00:29:11.590 PCIE (0000:00:06.0) NSID 1 from core 1: 35584.00 139.00 449.34 173.37 1281.33 00:29:11.590 ======================================================== 00:29:11.590 Total : 35584.00 139.00 449.34 173.37 1281.33 00:29:11.590 00:29:12.525 Initializing NVMe Controllers 00:29:12.525 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:12.525 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:29:12.525 Initialization complete. Launching workers. 00:29:12.525 ======================================================== 00:29:12.525 Latency(us) 00:29:12.525 Device Information : IOPS MiB/s Average min max 00:29:12.525 PCIE (0000:00:06.0) NSID 1 from core 0: 43185.10 168.69 370.21 139.48 7351.85 00:29:12.525 ======================================================== 00:29:12.525 Total : 43185.10 168.69 370.21 139.48 7351.85 00:29:12.525 00:29:12.525 01:11:46 -- nvme/nvme.sh@57 -- # wait 149247 00:29:12.525 01:11:46 -- nvme/nvme.sh@61 -- # pid0=149323 00:29:12.525 01:11:46 -- nvme/nvme.sh@63 -- # pid1=149324 00:29:12.525 01:11:46 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:29:12.525 01:11:46 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:29:12.525 01:11:46 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:29:16.710 Initializing NVMe Controllers 00:29:16.710 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:16.710 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:29:16.710 Initialization complete. Launching workers. 00:29:16.710 ======================================================== 00:29:16.710 Latency(us) 00:29:16.710 Device Information : IOPS MiB/s Average min max 00:29:16.710 PCIE (0000:00:06.0) NSID 1 from core 1: 34170.67 133.48 467.95 168.92 1575.55 00:29:16.710 ======================================================== 00:29:16.710 Total : 34170.67 133.48 467.95 168.92 1575.55 00:29:16.710 00:29:16.710 Initializing NVMe Controllers 00:29:16.710 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:16.710 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:29:16.710 Initialization complete. Launching workers. 
00:29:16.710 ======================================================== 00:29:16.710 Latency(us) 00:29:16.710 Device Information : IOPS MiB/s Average min max 00:29:16.710 PCIE (0000:00:06.0) NSID 1 from core 0: 34201.28 133.60 467.55 166.41 6889.14 00:29:16.710 ======================================================== 00:29:16.710 Total : 34201.28 133.60 467.55 166.41 6889.14 00:29:16.710 00:29:18.089 Initializing NVMe Controllers 00:29:18.089 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:18.089 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:29:18.089 Initialization complete. Launching workers. 00:29:18.089 ======================================================== 00:29:18.089 Latency(us) 00:29:18.089 Device Information : IOPS MiB/s Average min max 00:29:18.089 PCIE (0000:00:06.0) NSID 1 from core 2: 17680.92 69.07 904.31 155.01 20419.75 00:29:18.089 ======================================================== 00:29:18.089 Total : 17680.92 69.07 904.31 155.01 20419.75 00:29:18.089 00:29:18.089 01:11:52 -- nvme/nvme.sh@65 -- # wait 149323 00:29:18.089 01:11:52 -- nvme/nvme.sh@66 -- # wait 149324 00:29:18.089 00:29:18.089 real 0m10.704s 00:29:18.089 user 0m18.630s 00:29:18.089 sys 0m0.827s 00:29:18.089 01:11:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:18.089 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:18.089 ************************************ 00:29:18.089 END TEST nvme_multi_secondary 00:29:18.089 ************************************ 00:29:18.089 01:11:52 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:29:18.089 01:11:52 -- nvme/nvme.sh@102 -- # kill_stub 00:29:18.089 01:11:52 -- common/autotest_common.sh@1075 -- # [[ -e /proc/148538 ]] 00:29:18.089 01:11:52 -- common/autotest_common.sh@1076 -- # kill 148538 00:29:18.089 01:11:52 -- common/autotest_common.sh@1077 -- # wait 148538 00:29:18.657 [2024-11-18 01:11:52.947152] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149172) is not found. Dropping the request. 00:29:18.657 [2024-11-18 01:11:52.947342] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149172) is not found. Dropping the request. 00:29:18.657 [2024-11-18 01:11:52.947441] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149172) is not found. Dropping the request. 00:29:18.657 [2024-11-18 01:11:52.947525] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149172) is not found. Dropping the request. 00:29:18.915 01:11:53 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:29:18.915 01:11:53 -- common/autotest_common.sh@1083 -- # echo 2 00:29:18.915 01:11:53 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:29:18.915 01:11:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:18.915 01:11:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:18.915 01:11:53 -- common/autotest_common.sh@10 -- # set +x 00:29:18.915 ************************************ 00:29:18.915 START TEST bdev_nvme_reset_stuck_adm_cmd 00:29:18.915 ************************************ 00:29:18.915 01:11:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:29:18.915 * Looking for test storage... 
00:29:18.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:18.915 01:11:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:18.915 01:11:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:18.915 01:11:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:19.174 01:11:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:19.174 01:11:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:19.174 01:11:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:19.174 01:11:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:19.174 01:11:53 -- scripts/common.sh@335 -- # IFS=.-: 00:29:19.174 01:11:53 -- scripts/common.sh@335 -- # read -ra ver1 00:29:19.174 01:11:53 -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.174 01:11:53 -- scripts/common.sh@336 -- # read -ra ver2 00:29:19.174 01:11:53 -- scripts/common.sh@337 -- # local 'op=<' 00:29:19.174 01:11:53 -- scripts/common.sh@339 -- # ver1_l=2 00:29:19.174 01:11:53 -- scripts/common.sh@340 -- # ver2_l=1 00:29:19.174 01:11:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:19.174 01:11:53 -- scripts/common.sh@343 -- # case "$op" in 00:29:19.174 01:11:53 -- scripts/common.sh@344 -- # : 1 00:29:19.174 01:11:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:19.174 01:11:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:19.174 01:11:53 -- scripts/common.sh@364 -- # decimal 1 00:29:19.174 01:11:53 -- scripts/common.sh@352 -- # local d=1 00:29:19.174 01:11:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.174 01:11:53 -- scripts/common.sh@354 -- # echo 1 00:29:19.174 01:11:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:19.174 01:11:53 -- scripts/common.sh@365 -- # decimal 2 00:29:19.174 01:11:53 -- scripts/common.sh@352 -- # local d=2 00:29:19.174 01:11:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.174 01:11:53 -- scripts/common.sh@354 -- # echo 2 00:29:19.174 01:11:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:19.174 01:11:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:19.174 01:11:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:19.174 01:11:53 -- scripts/common.sh@367 -- # return 0 00:29:19.174 01:11:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.174 01:11:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:19.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.174 --rc genhtml_branch_coverage=1 00:29:19.175 --rc genhtml_function_coverage=1 00:29:19.175 --rc genhtml_legend=1 00:29:19.175 --rc geninfo_all_blocks=1 00:29:19.175 --rc geninfo_unexecuted_blocks=1 00:29:19.175 00:29:19.175 ' 00:29:19.175 01:11:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:19.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.175 --rc genhtml_branch_coverage=1 00:29:19.175 --rc genhtml_function_coverage=1 00:29:19.175 --rc genhtml_legend=1 00:29:19.175 --rc geninfo_all_blocks=1 00:29:19.175 --rc geninfo_unexecuted_blocks=1 00:29:19.175 00:29:19.175 ' 00:29:19.175 01:11:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:19.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.175 --rc genhtml_branch_coverage=1 00:29:19.175 --rc genhtml_function_coverage=1 00:29:19.175 --rc genhtml_legend=1 00:29:19.175 --rc geninfo_all_blocks=1 00:29:19.175 --rc geninfo_unexecuted_blocks=1 00:29:19.175 00:29:19.175 ' 00:29:19.175 01:11:53 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:19.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.175 --rc genhtml_branch_coverage=1 00:29:19.175 --rc genhtml_function_coverage=1 00:29:19.175 --rc genhtml_legend=1 00:29:19.175 --rc geninfo_all_blocks=1 00:29:19.175 --rc geninfo_unexecuted_blocks=1 00:29:19.175 00:29:19.175 ' 00:29:19.175 01:11:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:29:19.175 01:11:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:29:19.175 01:11:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:29:19.175 01:11:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:29:19.175 01:11:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:29:19.175 01:11:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:29:19.175 01:11:53 -- common/autotest_common.sh@1519 -- # bdfs=() 00:29:19.175 01:11:53 -- common/autotest_common.sh@1519 -- # local bdfs 00:29:19.175 01:11:53 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:29:19.175 01:11:53 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:29:19.175 01:11:53 -- common/autotest_common.sh@1508 -- # bdfs=() 00:29:19.175 01:11:53 -- common/autotest_common.sh@1508 -- # local bdfs 00:29:19.175 01:11:53 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:19.175 01:11:53 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:19.175 01:11:53 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:29:19.175 01:11:53 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:29:19.175 01:11:53 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:29:19.175 01:11:53 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:29:19.175 01:11:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:29:19.175 01:11:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:29:19.175 01:11:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=149483 00:29:19.175 01:11:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:29:19.175 01:11:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:19.175 01:11:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 149483 00:29:19.175 01:11:53 -- common/autotest_common.sh@829 -- # '[' -z 149483 ']' 00:29:19.175 01:11:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.175 01:11:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:19.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.175 01:11:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.175 01:11:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:19.175 01:11:53 -- common/autotest_common.sh@10 -- # set +x 00:29:19.175 [2024-11-18 01:11:53.548851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:19.175 [2024-11-18 01:11:53.549124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149483 ] 00:29:19.433 [2024-11-18 01:11:53.754733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:19.692 [2024-11-18 01:11:53.852797] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:19.692 [2024-11-18 01:11:53.853293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.692 [2024-11-18 01:11:53.853387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.692 [2024-11-18 01:11:53.853538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.692 [2024-11-18 01:11:53.853549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:20.260 01:11:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:20.260 01:11:54 -- common/autotest_common.sh@862 -- # return 0 00:29:20.260 01:11:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:29:20.260 01:11:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.260 01:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:20.260 nvme0n1 00:29:20.260 01:11:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.260 01:11:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:29:20.260 01:11:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_dnWIk.txt 00:29:20.260 01:11:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:29:20.260 01:11:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.260 01:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:20.260 true 00:29:20.260 01:11:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.260 01:11:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:29:20.260 01:11:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731892314 00:29:20.260 01:11:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=149511 00:29:20.260 01:11:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:20.260 01:11:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:29:20.260 01:11:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:29:22.162 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:22.162 01:11:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.162 01:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:22.162 [2024-11-18 01:11:56.556343] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:22.162 [2024-11-18 01:11:56.556807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:22.162 [2024-11-18 01:11:56.556933] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:29:22.162 [2024-11-18 01:11:56.556996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.162 [2024-11-18 01:11:56.559343] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:22.162 01:11:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.162 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 149511 00:29:22.162 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 149511 00:29:22.162 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 149511 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.421 01:11:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.421 01:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:22.421 01:11:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_dnWIk.txt 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_dnWIk.txt 00:29:22.421 01:11:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 149483 00:29:22.421 01:11:56 -- common/autotest_common.sh@936 -- # '[' -z 149483 ']' 00:29:22.421 01:11:56 -- common/autotest_common.sh@940 -- # kill -0 149483 00:29:22.421 01:11:56 -- common/autotest_common.sh@941 -- # uname 00:29:22.421 
01:11:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:22.421 01:11:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149483 00:29:22.421 01:11:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:22.421 killing process with pid 149483 00:29:22.421 01:11:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:22.421 01:11:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149483' 00:29:22.421 01:11:56 -- common/autotest_common.sh@955 -- # kill 149483 00:29:22.421 01:11:56 -- common/autotest_common.sh@960 -- # wait 149483 00:29:22.989 01:11:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:29:22.989 01:11:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:29:22.989 00:29:22.989 real 0m4.215s 00:29:22.989 user 0m14.075s 00:29:22.989 sys 0m0.884s 00:29:22.989 01:11:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:22.989 ************************************ 00:29:22.989 END TEST bdev_nvme_reset_stuck_adm_cmd 00:29:22.989 ************************************ 00:29:22.989 01:11:57 -- common/autotest_common.sh@10 -- # set +x 00:29:23.248 01:11:57 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:29:23.248 01:11:57 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:29:23.248 01:11:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:23.248 01:11:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:23.248 01:11:57 -- common/autotest_common.sh@10 -- # set +x 00:29:23.248 ************************************ 00:29:23.248 START TEST nvme_fio 00:29:23.248 ************************************ 00:29:23.248 01:11:57 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:29:23.248 01:11:57 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:29:23.248 01:11:57 -- nvme/nvme.sh@32 -- # ran_fio=false 00:29:23.248 01:11:57 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:29:23.248 01:11:57 -- common/autotest_common.sh@1508 -- # bdfs=() 00:29:23.248 01:11:57 -- common/autotest_common.sh@1508 -- # local bdfs 00:29:23.248 01:11:57 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:23.248 01:11:57 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:23.248 01:11:57 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:29:23.248 01:11:57 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:29:23.248 01:11:57 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:29:23.248 01:11:57 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:29:23.248 01:11:57 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:29:23.248 01:11:57 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:29:23.248 01:11:57 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:29:23.248 01:11:57 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:23.507 01:11:57 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:23.507 01:11:57 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:29:23.765 01:11:57 -- nvme/nvme.sh@41 -- # bs=4096 00:29:23.765 01:11:57 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:23.765 
01:11:57 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:23.765 01:11:57 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:29:23.765 01:11:57 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:23.765 01:11:57 -- common/autotest_common.sh@1328 -- # local sanitizers 00:29:23.765 01:11:57 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:23.765 01:11:57 -- common/autotest_common.sh@1330 -- # shift 00:29:23.765 01:11:57 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:29:23.765 01:11:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:29:23.765 01:11:57 -- common/autotest_common.sh@1334 -- # grep libasan 00:29:23.765 01:11:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:23.765 01:11:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:29:23.766 01:11:57 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:29:23.766 01:11:57 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:29:23.766 01:11:57 -- common/autotest_common.sh@1336 -- # break 00:29:23.766 01:11:57 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:29:23.766 01:11:57 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:23.766 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:23.766 fio-3.35 00:29:23.766 Starting 1 thread 00:29:27.088 00:29:27.088 test: (groupid=0, jobs=1): err= 0: pid=149646: Mon Nov 18 01:12:01 2024 00:29:27.088 read: IOPS=19.7k, BW=76.8MiB/s (80.5MB/s)(154MiB/2001msec) 00:29:27.088 slat (usec): min=3, max=171, avg= 4.87, stdev= 2.73 00:29:27.088 clat (usec): min=260, max=8592, avg=3236.95, stdev=232.15 00:29:27.088 lat (usec): min=265, max=8689, avg=3241.82, stdev=232.37 00:29:27.088 clat percentiles (usec): 00:29:27.088 | 1.00th=[ 2868], 5.00th=[ 2966], 10.00th=[ 3032], 20.00th=[ 3097], 00:29:27.088 | 30.00th=[ 3163], 40.00th=[ 3195], 50.00th=[ 3228], 60.00th=[ 3261], 00:29:27.088 | 70.00th=[ 3294], 80.00th=[ 3359], 90.00th=[ 3425], 95.00th=[ 3523], 00:29:27.088 | 99.00th=[ 3720], 99.50th=[ 3785], 99.90th=[ 5669], 99.95th=[ 7701], 00:29:27.088 | 99.99th=[ 8455] 00:29:27.088 bw ( KiB/s): min=77133, max=78968, per=99.37%, avg=78143.00, stdev=931.38, samples=3 00:29:27.088 iops : min=19283, max=19742, avg=19535.67, stdev=232.98, samples=3 00:29:27.088 write: IOPS=19.6k, BW=76.7MiB/s (80.4MB/s)(153MiB/2001msec); 0 zone resets 00:29:27.088 slat (nsec): min=3897, max=37362, avg=5128.95, stdev=2376.40 00:29:27.088 clat (usec): min=244, max=8509, avg=3257.72, stdev=236.59 00:29:27.088 lat (usec): min=249, max=8539, avg=3262.85, stdev=236.76 00:29:27.088 clat percentiles (usec): 00:29:27.088 | 1.00th=[ 2868], 5.00th=[ 2999], 10.00th=[ 3064], 20.00th=[ 3130], 00:29:27.088 | 30.00th=[ 3163], 40.00th=[ 3195], 50.00th=[ 3261], 60.00th=[ 3294], 00:29:27.088 | 70.00th=[ 3326], 80.00th=[ 3392], 90.00th=[ 3458], 95.00th=[ 3523], 00:29:27.088 | 99.00th=[ 3720], 99.50th=[ 3818], 99.90th=[ 5932], 99.95th=[ 7832], 
00:29:27.088 | 99.99th=[ 8455] 00:29:27.088 bw ( KiB/s): min=76958, max=79224, per=99.67%, avg=78266.00, stdev=1172.84, samples=3 00:29:27.088 iops : min=19239, max=19806, avg=19566.33, stdev=293.49, samples=3 00:29:27.088 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:29:27.088 lat (msec) : 2=0.09%, 4=99.63%, 10=0.24% 00:29:27.088 cpu : usr=99.40%, sys=0.40%, ctx=27, majf=0, minf=39 00:29:27.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:27.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:27.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:27.088 issued rwts: total=39337,39283,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:27.088 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:27.088 00:29:27.088 Run status group 0 (all jobs): 00:29:27.088 READ: bw=76.8MiB/s (80.5MB/s), 76.8MiB/s-76.8MiB/s (80.5MB/s-80.5MB/s), io=154MiB (161MB), run=2001-2001msec 00:29:27.088 WRITE: bw=76.7MiB/s (80.4MB/s), 76.7MiB/s-76.7MiB/s (80.4MB/s-80.4MB/s), io=153MiB (161MB), run=2001-2001msec 00:29:27.347 ----------------------------------------------------- 00:29:27.347 Suppressions used: 00:29:27.347 count bytes template 00:29:27.347 1 32 /usr/src/fio/parse.c 00:29:27.347 ----------------------------------------------------- 00:29:27.347 00:29:27.347 01:12:01 -- nvme/nvme.sh@44 -- # ran_fio=true 00:29:27.347 01:12:01 -- nvme/nvme.sh@46 -- # true 00:29:27.347 ************************************ 00:29:27.347 END TEST nvme_fio 00:29:27.347 ************************************ 00:29:27.347 00:29:27.347 real 0m4.276s 00:29:27.347 user 0m3.531s 00:29:27.347 sys 0m0.442s 00:29:27.347 01:12:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:27.347 01:12:01 -- common/autotest_common.sh@10 -- # set +x 00:29:27.606 00:29:27.606 real 0m47.174s 00:29:27.606 user 1m57.271s 00:29:27.606 sys 0m11.746s 00:29:27.606 01:12:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:27.606 01:12:01 -- common/autotest_common.sh@10 -- # set +x 00:29:27.606 ************************************ 00:29:27.606 END TEST nvme 00:29:27.606 ************************************ 00:29:27.606 01:12:01 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:29:27.606 01:12:01 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:27.606 01:12:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:27.606 01:12:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:27.606 01:12:01 -- common/autotest_common.sh@10 -- # set +x 00:29:27.606 ************************************ 00:29:27.606 START TEST nvme_scc 00:29:27.607 ************************************ 00:29:27.607 01:12:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:27.607 * Looking for test storage... 
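The fio banner above (rw=randrw, 4096B blocks, ioengine=spdk, iodepth=128, one thread, a time-based run of roughly 2 s against "trtype=PCIe traddr=0000.00.06.0") corresponds to a job file along these lines. This is a hedged reconstruction for illustration, not the literal contents of SPDK's example_config.fio; in the run above the filename was passed on the command line instead, and runtime/time_based/thread are inferred from the banner and the 2001 msec run time.

# Assumed job file consistent with the parameters shown in the banner above.
cat > /tmp/spdk_nvme_job.fio <<'EOF'
[global]
ioengine=spdk
thread=1
rw=randrw
bs=4096
iodepth=128
time_based=1
runtime=2

[test]
filename=trtype=PCIe traddr=0000.00.06.0
EOF

# Run through the SPDK fio plugin, as the traced fio_nvme helper above does:
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/src/fio/fio /tmp/spdk_nvme_job.fio

The reported bandwidth is consistent with the IOPS figure: the ~19.7k read IOPS at 4 KiB per IO works out to the ~76.8 MiB/s shown on the READ line.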
00:29:27.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:27.607 01:12:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:27.607 01:12:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:27.607 01:12:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:27.867 01:12:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:27.867 01:12:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:27.867 01:12:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:27.867 01:12:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:27.867 01:12:02 -- scripts/common.sh@335 -- # IFS=.-: 00:29:27.867 01:12:02 -- scripts/common.sh@335 -- # read -ra ver1 00:29:27.867 01:12:02 -- scripts/common.sh@336 -- # IFS=.-: 00:29:27.867 01:12:02 -- scripts/common.sh@336 -- # read -ra ver2 00:29:27.867 01:12:02 -- scripts/common.sh@337 -- # local 'op=<' 00:29:27.867 01:12:02 -- scripts/common.sh@339 -- # ver1_l=2 00:29:27.867 01:12:02 -- scripts/common.sh@340 -- # ver2_l=1 00:29:27.867 01:12:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:27.867 01:12:02 -- scripts/common.sh@343 -- # case "$op" in 00:29:27.867 01:12:02 -- scripts/common.sh@344 -- # : 1 00:29:27.867 01:12:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:27.867 01:12:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:27.867 01:12:02 -- scripts/common.sh@364 -- # decimal 1 00:29:27.867 01:12:02 -- scripts/common.sh@352 -- # local d=1 00:29:27.867 01:12:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:27.867 01:12:02 -- scripts/common.sh@354 -- # echo 1 00:29:27.867 01:12:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:27.867 01:12:02 -- scripts/common.sh@365 -- # decimal 2 00:29:27.867 01:12:02 -- scripts/common.sh@352 -- # local d=2 00:29:27.867 01:12:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:27.867 01:12:02 -- scripts/common.sh@354 -- # echo 2 00:29:27.867 01:12:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:27.867 01:12:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:27.867 01:12:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:27.867 01:12:02 -- scripts/common.sh@367 -- # return 0 00:29:27.867 01:12:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:27.867 01:12:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:27.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.867 --rc genhtml_branch_coverage=1 00:29:27.867 --rc genhtml_function_coverage=1 00:29:27.867 --rc genhtml_legend=1 00:29:27.867 --rc geninfo_all_blocks=1 00:29:27.867 --rc geninfo_unexecuted_blocks=1 00:29:27.867 00:29:27.867 ' 00:29:27.867 01:12:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:27.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.867 --rc genhtml_branch_coverage=1 00:29:27.867 --rc genhtml_function_coverage=1 00:29:27.867 --rc genhtml_legend=1 00:29:27.867 --rc geninfo_all_blocks=1 00:29:27.867 --rc geninfo_unexecuted_blocks=1 00:29:27.867 00:29:27.867 ' 00:29:27.867 01:12:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:27.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.867 --rc genhtml_branch_coverage=1 00:29:27.867 --rc genhtml_function_coverage=1 00:29:27.867 --rc genhtml_legend=1 00:29:27.867 --rc geninfo_all_blocks=1 00:29:27.867 --rc geninfo_unexecuted_blocks=1 00:29:27.867 00:29:27.867 ' 00:29:27.867 01:12:02 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:27.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.867 --rc genhtml_branch_coverage=1 00:29:27.867 --rc genhtml_function_coverage=1 00:29:27.867 --rc genhtml_legend=1 00:29:27.867 --rc geninfo_all_blocks=1 00:29:27.867 --rc geninfo_unexecuted_blocks=1 00:29:27.867 00:29:27.867 ' 00:29:27.867 01:12:02 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:27.867 01:12:02 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:27.867 01:12:02 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:29:27.867 01:12:02 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:27.867 01:12:02 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:27.867 01:12:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.867 01:12:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.867 01:12:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.867 01:12:02 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:27.867 01:12:02 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:27.867 01:12:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:27.867 01:12:02 -- paths/export.sh@5 -- # export PATH 00:29:27.867 01:12:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:27.867 01:12:02 -- nvme/functions.sh@10 -- # ctrls=() 00:29:27.867 01:12:02 -- nvme/functions.sh@10 -- # declare -A ctrls 00:29:27.867 01:12:02 -- nvme/functions.sh@11 -- # nvmes=() 00:29:27.867 01:12:02 -- nvme/functions.sh@11 -- # declare -A nvmes 00:29:27.867 01:12:02 -- nvme/functions.sh@12 -- # bdfs=() 00:29:27.867 01:12:02 -- nvme/functions.sh@12 -- # declare -A bdfs 00:29:27.867 01:12:02 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:29:27.867 01:12:02 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:29:27.867 01:12:02 -- nvme/functions.sh@14 -- # nvme_name= 00:29:27.867 
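The lcov option selection traced above reduces to a plain-shell version compare: scripts/common.sh splits each version string on the characters '.-:' and walks the fields numerically, so lcov 1.15 sorts below 2 and the legacy --rc lcov_* options get exported. A minimal sketch of that comparison idea, using a hypothetical helper name version_lt and assuming purely numeric fields; it is an illustration, not the project's cmp_versions/lt code verbatim:

    # version_lt A B -> returns 0 (true) when A < B, comparing '.-:'-separated numeric fields
    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    # usage: keep the legacy lcov --rc options only when lcov is older than 2
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi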
01:12:02 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:27.867 01:12:02 -- nvme/nvme_scc.sh@12 -- # uname 00:29:27.867 01:12:02 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:29:27.867 01:12:02 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:29:27.867 01:12:02 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:28.126 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:28.386 Waiting for block devices as requested 00:29:28.386 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:28.386 01:12:02 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:29:28.386 01:12:02 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:29:28.387 01:12:02 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:29:28.387 01:12:02 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:29:28.387 01:12:02 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:29:28.387 01:12:02 -- scripts/common.sh@15 -- # local i 00:29:28.387 01:12:02 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:29:28.387 01:12:02 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:28.387 01:12:02 -- scripts/common.sh@24 -- # return 0 00:29:28.387 01:12:02 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:29:28.387 01:12:02 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:29:28.387 01:12:02 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@18 -- # shift 00:29:28.387 01:12:02 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:29:28.387 
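The long run of nvme0[...] assignments here is scan_nvme_ctrls at work: it pipes `nvme id-ctrl /dev/nvme0` through an `IFS=: read -r reg val` loop and stores every register/value pair in a bash associative array, so later feature checks (mdts, oncs, vwc, ...) become plain array lookups instead of repeated device queries. A rough standalone sketch of that parsing pattern, with hypothetical variable names rather than the exact nvme/functions.sh code:

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                  # e.g. vid, sn, mdts, oncs
        val=${val#"${val%%[![:space:]]*}"}        # trim leading whitespace from the value
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)

    echo "mdts=${ctrl[mdts]} oncs=${ctrl[oncs]}"
    # bit 8 of ONCS advertises the Copy command; the scc feature check later in this
    # log tests exactly that bit (here oncs=0x15d, so the bit is set)
    (( ${ctrl[oncs]} & 1 << 8 )) && echo "controller supports Simple Copy"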
01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 
-- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:29:28.387 
01:12:02 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.387 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:29:28.387 01:12:02 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:29:28.387 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.388 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.388 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:28.388 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:29:28.388 01:12:02 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:29:28.388 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.388 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.388 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.388 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:29:28.388 01:12:02 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:29:28.388 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.388 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.388 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.388 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:29:28.388 01:12:02 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:29:28.388 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.388 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.388 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.388 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:29:28.388 01:12:02 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:29:28.388 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.388 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.388 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.388 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:29:28.388 01:12:02 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.649 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.649 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.649 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.649 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.649 01:12:02 -- 
nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.649 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.649 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.649 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.649 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.649 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.649 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.649 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.649 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.649 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.649 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.649 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 
00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- 
# nvme0[awun]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.650 01:12:02 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:29:28.650 01:12:02 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.650 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:29:28.651 01:12:02 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:29:28.651 01:12:02 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:29:28.651 01:12:02 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 
id-ns /dev/nvme0n1 00:29:28.651 01:12:02 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@18 -- # shift 00:29:28.651 01:12:02 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # 
nvme0n1[dps]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- 
nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:29:28.651 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.651 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.651 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r 
reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 
rp:0 ' 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:29:28.652 01:12:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # IFS=: 00:29:28.652 01:12:02 -- nvme/functions.sh@21 -- # read -r reg val 00:29:28.652 01:12:02 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:29:28.652 01:12:02 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:29:28.652 01:12:02 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:29:28.652 01:12:02 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:29:28.652 01:12:02 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:29:28.652 01:12:02 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:29:28.652 01:12:02 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:29:28.652 01:12:02 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:29:28.652 01:12:02 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:29:28.652 01:12:02 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:29:28.652 01:12:02 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:29:28.652 01:12:02 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:29:28.652 01:12:02 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:29:28.652 01:12:02 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:29:28.652 01:12:02 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:29:28.652 01:12:02 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:29:28.652 01:12:02 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:29:28.652 01:12:02 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:29:28.652 01:12:02 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:29:28.652 01:12:02 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:29:28.652 01:12:02 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:29:28.652 01:12:02 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:29:28.652 01:12:02 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:29:28.652 01:12:02 -- 
nvme/functions.sh@76 -- # echo 0x15d 00:29:28.652 01:12:02 -- nvme/functions.sh@184 -- # oncs=0x15d 00:29:28.652 01:12:02 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:29:28.652 01:12:02 -- nvme/functions.sh@197 -- # echo nvme0 00:29:28.652 01:12:02 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:29:28.652 01:12:02 -- nvme/functions.sh@206 -- # echo nvme0 00:29:28.652 01:12:02 -- nvme/functions.sh@207 -- # return 0 00:29:28.652 01:12:02 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:29:28.652 01:12:02 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:29:28.652 01:12:02 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:29.222 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:29.222 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:29:31.130 01:12:05 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:29:31.130 01:12:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:29:31.130 01:12:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:31.130 01:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:31.130 ************************************ 00:29:31.130 START TEST nvme_simple_copy 00:29:31.130 ************************************ 00:29:31.130 01:12:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:29:31.388 Initializing NVMe Controllers 00:29:31.388 Attaching to 0000:00:06.0 00:29:31.388 Controller supports SCC. Attached to 0000:00:06.0 00:29:31.388 Namespace ID: 1 size: 5GB 00:29:31.389 Initialization complete. 00:29:31.389 00:29:31.389 Controller QEMU NVMe Ctrl (12340 ) 00:29:31.389 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:29:31.389 Namespace Block Size:4096 00:29:31.389 Writing LBAs 0 to 63 with Random Data 00:29:31.389 Copied LBAs from 0 - 63 to the Destination LBA 256 00:29:31.389 LBAs matching Written Data: 64 00:29:31.389 00:29:31.389 real 0m0.308s 00:29:31.389 user 0m0.104s 00:29:31.389 sys 0m0.106s 00:29:31.389 01:12:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:31.389 ************************************ 00:29:31.389 END TEST nvme_simple_copy 00:29:31.389 ************************************ 00:29:31.389 01:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:31.648 00:29:31.648 real 0m3.969s 00:29:31.648 user 0m0.854s 00:29:31.648 sys 0m3.031s 00:29:31.648 01:12:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:31.648 ************************************ 00:29:31.648 END TEST nvme_scc 00:29:31.648 ************************************ 00:29:31.648 01:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:31.648 01:12:05 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:29:31.648 01:12:05 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:29:31.648 01:12:05 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:29:31.648 01:12:05 -- spdk/autotest.sh@225 -- # [[ 0 -eq 1 ]] 00:29:31.648 01:12:05 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:29:31.648 01:12:05 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:31.648 01:12:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:31.648 01:12:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:31.648 01:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:31.648 ************************************ 00:29:31.648 START TEST nvme_rpc 
00:29:31.648 ************************************ 00:29:31.648 01:12:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:31.648 * Looking for test storage... 00:29:31.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:31.648 01:12:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:31.648 01:12:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:31.648 01:12:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:31.908 01:12:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:31.908 01:12:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:31.908 01:12:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:31.908 01:12:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:31.908 01:12:06 -- scripts/common.sh@335 -- # IFS=.-: 00:29:31.908 01:12:06 -- scripts/common.sh@335 -- # read -ra ver1 00:29:31.908 01:12:06 -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.908 01:12:06 -- scripts/common.sh@336 -- # read -ra ver2 00:29:31.908 01:12:06 -- scripts/common.sh@337 -- # local 'op=<' 00:29:31.908 01:12:06 -- scripts/common.sh@339 -- # ver1_l=2 00:29:31.908 01:12:06 -- scripts/common.sh@340 -- # ver2_l=1 00:29:31.908 01:12:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:31.908 01:12:06 -- scripts/common.sh@343 -- # case "$op" in 00:29:31.908 01:12:06 -- scripts/common.sh@344 -- # : 1 00:29:31.908 01:12:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:31.908 01:12:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:31.908 01:12:06 -- scripts/common.sh@364 -- # decimal 1 00:29:31.908 01:12:06 -- scripts/common.sh@352 -- # local d=1 00:29:31.908 01:12:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.908 01:12:06 -- scripts/common.sh@354 -- # echo 1 00:29:31.908 01:12:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:31.908 01:12:06 -- scripts/common.sh@365 -- # decimal 2 00:29:31.908 01:12:06 -- scripts/common.sh@352 -- # local d=2 00:29:31.908 01:12:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.908 01:12:06 -- scripts/common.sh@354 -- # echo 2 00:29:31.908 01:12:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:31.908 01:12:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:31.908 01:12:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:31.908 01:12:06 -- scripts/common.sh@367 -- # return 0 00:29:31.908 01:12:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.908 01:12:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:31.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.908 --rc genhtml_branch_coverage=1 00:29:31.908 --rc genhtml_function_coverage=1 00:29:31.908 --rc genhtml_legend=1 00:29:31.908 --rc geninfo_all_blocks=1 00:29:31.908 --rc geninfo_unexecuted_blocks=1 00:29:31.908 00:29:31.908 ' 00:29:31.908 01:12:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:31.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.908 --rc genhtml_branch_coverage=1 00:29:31.908 --rc genhtml_function_coverage=1 00:29:31.908 --rc genhtml_legend=1 00:29:31.908 --rc geninfo_all_blocks=1 00:29:31.908 --rc geninfo_unexecuted_blocks=1 00:29:31.908 00:29:31.908 ' 00:29:31.908 01:12:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:31.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.908 --rc genhtml_branch_coverage=1 00:29:31.908 
--rc genhtml_function_coverage=1 00:29:31.908 --rc genhtml_legend=1 00:29:31.908 --rc geninfo_all_blocks=1 00:29:31.908 --rc geninfo_unexecuted_blocks=1 00:29:31.908 00:29:31.908 ' 00:29:31.908 01:12:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:31.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.909 --rc genhtml_branch_coverage=1 00:29:31.909 --rc genhtml_function_coverage=1 00:29:31.909 --rc genhtml_legend=1 00:29:31.909 --rc geninfo_all_blocks=1 00:29:31.909 --rc geninfo_unexecuted_blocks=1 00:29:31.909 00:29:31.909 ' 00:29:31.909 01:12:06 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:31.909 01:12:06 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:29:31.909 01:12:06 -- common/autotest_common.sh@1519 -- # bdfs=() 00:29:31.909 01:12:06 -- common/autotest_common.sh@1519 -- # local bdfs 00:29:31.909 01:12:06 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:29:31.909 01:12:06 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:29:31.909 01:12:06 -- common/autotest_common.sh@1508 -- # bdfs=() 00:29:31.909 01:12:06 -- common/autotest_common.sh@1508 -- # local bdfs 00:29:31.909 01:12:06 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:31.909 01:12:06 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:31.909 01:12:06 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:29:31.909 01:12:06 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:29:31.909 01:12:06 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:29:31.909 01:12:06 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:29:31.909 01:12:06 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:29:31.909 01:12:06 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=150155 00:29:31.909 01:12:06 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:29:31.909 01:12:06 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 150155 00:29:31.909 01:12:06 -- common/autotest_common.sh@829 -- # '[' -z 150155 ']' 00:29:31.909 01:12:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.909 01:12:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:31.909 01:12:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.909 01:12:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:31.909 01:12:06 -- common/autotest_common.sh@10 -- # set +x 00:29:31.909 01:12:06 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:31.909 [2024-11-18 01:12:06.276473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
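The trace above shows nvme_rpc.sh resolving its target controller: gen_nvme.sh emits the local NVMe attach configuration and jq extracts each PCI address (traddr), the first of which becomes the test's bdf. A minimal stand-alone sketch of that lookup, using the repo paths from this job; "head -n1" stands in for the script's array indexing and is not the literal helper code:

  rootdir=/home/vagrant/spdk_repo/spdk
  # gen_nvme.sh prints a bdev_nvme_attach_controller entry per detected NVMe device;
  # the traddr field of the first entry is the PCI address the test will drive.
  bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
  echo "first NVMe bdf: $bdf"   # 0000:00:06.0 in this run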
00:29:31.909 [2024-11-18 01:12:06.276709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150155 ] 00:29:32.168 [2024-11-18 01:12:06.441205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:32.168 [2024-11-18 01:12:06.559906] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:32.168 [2024-11-18 01:12:06.560402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.168 [2024-11-18 01:12:06.560399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.106 01:12:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:33.106 01:12:07 -- common/autotest_common.sh@862 -- # return 0 00:29:33.106 01:12:07 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:29:33.106 Nvme0n1 00:29:33.365 01:12:07 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:29:33.365 01:12:07 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:29:33.365 request: 00:29:33.365 { 00:29:33.365 "filename": "non_existing_file", 00:29:33.365 "bdev_name": "Nvme0n1", 00:29:33.365 "method": "bdev_nvme_apply_firmware", 00:29:33.365 "req_id": 1 00:29:33.365 } 00:29:33.365 Got JSON-RPC error response 00:29:33.365 response: 00:29:33.365 { 00:29:33.365 "code": -32603, 00:29:33.365 "message": "open file failed." 00:29:33.365 } 00:29:33.365 01:12:07 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:29:33.365 01:12:07 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:29:33.365 01:12:07 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:33.624 01:12:07 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:29:33.624 01:12:07 -- nvme/nvme_rpc.sh@40 -- # killprocess 150155 00:29:33.624 01:12:07 -- common/autotest_common.sh@936 -- # '[' -z 150155 ']' 00:29:33.624 01:12:07 -- common/autotest_common.sh@940 -- # kill -0 150155 00:29:33.624 01:12:07 -- common/autotest_common.sh@941 -- # uname 00:29:33.624 01:12:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:33.624 01:12:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150155 00:29:33.624 01:12:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:33.624 killing process with pid 150155 00:29:33.624 01:12:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:33.624 01:12:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150155' 00:29:33.624 01:12:07 -- common/autotest_common.sh@955 -- # kill 150155 00:29:33.624 01:12:07 -- common/autotest_common.sh@960 -- # wait 150155 00:29:34.564 00:29:34.564 real 0m2.750s 00:29:34.564 user 0m4.925s 00:29:34.564 sys 0m0.844s 00:29:34.564 01:12:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:34.564 01:12:08 -- common/autotest_common.sh@10 -- # set +x 00:29:34.564 ************************************ 00:29:34.564 END TEST nvme_rpc 00:29:34.564 ************************************ 00:29:34.564 01:12:08 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:34.564 01:12:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:34.564 01:12:08 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:29:34.564 01:12:08 -- common/autotest_common.sh@10 -- # set +x 00:29:34.564 ************************************ 00:29:34.564 START TEST nvme_rpc_timeouts 00:29:34.564 ************************************ 00:29:34.564 01:12:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:34.564 * Looking for test storage... 00:29:34.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:34.564 01:12:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:34.564 01:12:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:34.564 01:12:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:34.564 01:12:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:34.564 01:12:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:34.564 01:12:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:34.564 01:12:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:34.564 01:12:08 -- scripts/common.sh@335 -- # IFS=.-: 00:29:34.564 01:12:08 -- scripts/common.sh@335 -- # read -ra ver1 00:29:34.564 01:12:08 -- scripts/common.sh@336 -- # IFS=.-: 00:29:34.564 01:12:08 -- scripts/common.sh@336 -- # read -ra ver2 00:29:34.564 01:12:08 -- scripts/common.sh@337 -- # local 'op=<' 00:29:34.564 01:12:08 -- scripts/common.sh@339 -- # ver1_l=2 00:29:34.564 01:12:08 -- scripts/common.sh@340 -- # ver2_l=1 00:29:34.564 01:12:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:34.564 01:12:08 -- scripts/common.sh@343 -- # case "$op" in 00:29:34.564 01:12:08 -- scripts/common.sh@344 -- # : 1 00:29:34.564 01:12:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:34.564 01:12:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:34.564 01:12:08 -- scripts/common.sh@364 -- # decimal 1 00:29:34.564 01:12:08 -- scripts/common.sh@352 -- # local d=1 00:29:34.564 01:12:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:34.564 01:12:08 -- scripts/common.sh@354 -- # echo 1 00:29:34.564 01:12:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:34.564 01:12:08 -- scripts/common.sh@365 -- # decimal 2 00:29:34.564 01:12:08 -- scripts/common.sh@352 -- # local d=2 00:29:34.564 01:12:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:34.564 01:12:08 -- scripts/common.sh@354 -- # echo 2 00:29:34.564 01:12:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:34.564 01:12:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:34.564 01:12:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:34.564 01:12:08 -- scripts/common.sh@367 -- # return 0 00:29:34.564 01:12:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:34.564 01:12:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:34.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.564 --rc genhtml_branch_coverage=1 00:29:34.564 --rc genhtml_function_coverage=1 00:29:34.564 --rc genhtml_legend=1 00:29:34.564 --rc geninfo_all_blocks=1 00:29:34.564 --rc geninfo_unexecuted_blocks=1 00:29:34.564 00:29:34.564 ' 00:29:34.564 01:12:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:34.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.564 --rc genhtml_branch_coverage=1 00:29:34.564 --rc genhtml_function_coverage=1 00:29:34.564 --rc genhtml_legend=1 00:29:34.564 --rc geninfo_all_blocks=1 00:29:34.564 --rc geninfo_unexecuted_blocks=1 00:29:34.564 00:29:34.564 ' 00:29:34.564 01:12:08 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:34.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.564 --rc genhtml_branch_coverage=1 00:29:34.564 --rc genhtml_function_coverage=1 00:29:34.564 --rc genhtml_legend=1 00:29:34.564 --rc geninfo_all_blocks=1 00:29:34.564 --rc geninfo_unexecuted_blocks=1 00:29:34.564 00:29:34.564 ' 00:29:34.564 01:12:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:34.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.564 --rc genhtml_branch_coverage=1 00:29:34.564 --rc genhtml_function_coverage=1 00:29:34.564 --rc genhtml_legend=1 00:29:34.564 --rc geninfo_all_blocks=1 00:29:34.564 --rc geninfo_unexecuted_blocks=1 00:29:34.564 00:29:34.564 ' 00:29:34.564 01:12:08 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:34.564 01:12:08 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_150224 00:29:34.564 01:12:08 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_150224 00:29:34.564 01:12:08 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=150262 00:29:34.564 01:12:08 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:29:34.564 01:12:08 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:34.564 01:12:08 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 150262 00:29:34.564 01:12:08 -- common/autotest_common.sh@829 -- # '[' -z 150262 ']' 00:29:34.564 01:12:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.564 01:12:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:34.564 01:12:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.564 01:12:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:34.564 01:12:08 -- common/autotest_common.sh@10 -- # set +x 00:29:34.824 [2024-11-18 01:12:09.016554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
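Before the target is usable, nvme_rpc_timeouts.sh installs a cleanup trap so an aborted run still kills spdk_tgt and removes the two settings snapshots, and waitforlisten polls the RPC socket (up to the max_retries=100 seen in the trace) before the test proceeds. A rough sketch of that guard; the trap line is taken from the trace, while the polling loop is written from the traced variables and is only an approximation of the real autotest_common.sh helper:

  tmpfile_default_settings=/tmp/settings_default_150224
  tmpfile_modified_settings=/tmp/settings_modified_150224
  spdk_tgt_pid=$!        # pid of the freshly launched spdk_tgt -m 0x3 (launched in the background)
  trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
  # waitforlisten (sketch): retry until the UNIX-domain RPC socket answers
  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
    sleep 0.5
  done
  (( i == 100 )) && exit 1   # target never started listening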
00:29:34.824 [2024-11-18 01:12:09.016851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150262 ] 00:29:34.824 [2024-11-18 01:12:09.176365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:35.083 [2024-11-18 01:12:09.250359] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:35.083 [2024-11-18 01:12:09.250835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.083 [2024-11-18 01:12:09.250834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.651 01:12:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:35.651 Checking default timeout settings: 00:29:35.651 01:12:09 -- common/autotest_common.sh@862 -- # return 0 00:29:35.651 01:12:09 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:29:35.651 01:12:09 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:35.910 Making settings changes with rpc: 00:29:35.910 01:12:10 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:29:35.910 01:12:10 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:29:36.168 Check default vs. modified settings: 00:29:36.168 01:12:10 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:29:36.168 01:12:10 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_150224 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_150224 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:29:36.428 Setting action_on_timeout is changed as expected. 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
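The comparison traced above is a before/after diff of save_config output: each of the three timeout-related fields is grepped out of the default and modified snapshots, reduced to a bare value, and required to differ. A condensed sketch of that flow with this run's file names and values; it mirrors the traced grep/awk/sed pipeline rather than being a drop-in replacement for the script:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default_150224            # defaults observed: none / 0 / 0
  $rpc bdev_nvme_set_options --timeout-us=12000000 \
       --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified_150224
  for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_150224  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep  "$setting" /tmp/settings_modified_150224 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [ "$before" == "$after" ] && exit 1                      # an unchanged value means the RPC had no effect
    echo "Setting $setting is changed as expected."
  done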
00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_150224 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_150224 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:29:36.428 Setting timeout_us is changed as expected. 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_150224 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_150224 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:29:36.428 Setting timeout_admin_us is changed as expected. 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_150224 /tmp/settings_modified_150224 00:29:36.428 01:12:10 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 150262 00:29:36.428 01:12:10 -- common/autotest_common.sh@936 -- # '[' -z 150262 ']' 00:29:36.428 01:12:10 -- common/autotest_common.sh@940 -- # kill -0 150262 00:29:36.428 01:12:10 -- common/autotest_common.sh@941 -- # uname 00:29:36.428 01:12:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:36.428 01:12:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150262 00:29:36.428 01:12:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:36.428 killing process with pid 150262 00:29:36.428 01:12:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:36.428 01:12:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150262' 00:29:36.428 01:12:10 -- common/autotest_common.sh@955 -- # kill 150262 00:29:36.428 01:12:10 -- common/autotest_common.sh@960 -- # wait 150262 00:29:37.367 RPC TIMEOUT SETTING TEST PASSED. 00:29:37.367 01:12:11 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:29:37.367 ************************************ 00:29:37.367 END TEST nvme_rpc_timeouts 00:29:37.367 ************************************ 00:29:37.367 00:29:37.367 real 0m2.720s 00:29:37.367 user 0m5.053s 00:29:37.367 sys 0m0.720s 00:29:37.367 01:12:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:37.368 01:12:11 -- common/autotest_common.sh@10 -- # set +x 00:29:37.368 01:12:11 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@242 -- # [[ 0 -eq 1 ]] 00:29:37.368 01:12:11 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@255 -- # timing_exit lib 00:29:37.368 01:12:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:37.368 01:12:11 -- common/autotest_common.sh@10 -- # set +x 00:29:37.368 01:12:11 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:37.368 01:12:11 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:29:37.368 01:12:11 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:29:37.368 01:12:11 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:37.368 01:12:11 -- spdk/autotest.sh@365 -- # [[ 1 -eq 1 ]] 00:29:37.368 01:12:11 -- spdk/autotest.sh@366 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:29:37.368 01:12:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:37.368 01:12:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:37.368 01:12:11 -- common/autotest_common.sh@10 -- # set +x 00:29:37.368 ************************************ 00:29:37.368 START TEST blockdev_raid5f 00:29:37.368 ************************************ 00:29:37.368 01:12:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:29:37.368 * Looking for test storage... 
00:29:37.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:37.368 01:12:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:37.368 01:12:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:37.368 01:12:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:37.368 01:12:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:37.368 01:12:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:37.368 01:12:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:37.368 01:12:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:37.368 01:12:11 -- scripts/common.sh@335 -- # IFS=.-: 00:29:37.368 01:12:11 -- scripts/common.sh@335 -- # read -ra ver1 00:29:37.368 01:12:11 -- scripts/common.sh@336 -- # IFS=.-: 00:29:37.368 01:12:11 -- scripts/common.sh@336 -- # read -ra ver2 00:29:37.368 01:12:11 -- scripts/common.sh@337 -- # local 'op=<' 00:29:37.368 01:12:11 -- scripts/common.sh@339 -- # ver1_l=2 00:29:37.368 01:12:11 -- scripts/common.sh@340 -- # ver2_l=1 00:29:37.368 01:12:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:37.368 01:12:11 -- scripts/common.sh@343 -- # case "$op" in 00:29:37.368 01:12:11 -- scripts/common.sh@344 -- # : 1 00:29:37.368 01:12:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:37.368 01:12:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:37.368 01:12:11 -- scripts/common.sh@364 -- # decimal 1 00:29:37.368 01:12:11 -- scripts/common.sh@352 -- # local d=1 00:29:37.368 01:12:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:37.368 01:12:11 -- scripts/common.sh@354 -- # echo 1 00:29:37.368 01:12:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:37.368 01:12:11 -- scripts/common.sh@365 -- # decimal 2 00:29:37.368 01:12:11 -- scripts/common.sh@352 -- # local d=2 00:29:37.368 01:12:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:37.368 01:12:11 -- scripts/common.sh@354 -- # echo 2 00:29:37.368 01:12:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:37.368 01:12:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:37.368 01:12:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:37.368 01:12:11 -- scripts/common.sh@367 -- # return 0 00:29:37.368 01:12:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:37.368 01:12:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:37.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.368 --rc genhtml_branch_coverage=1 00:29:37.368 --rc genhtml_function_coverage=1 00:29:37.368 --rc genhtml_legend=1 00:29:37.368 --rc geninfo_all_blocks=1 00:29:37.368 --rc geninfo_unexecuted_blocks=1 00:29:37.368 00:29:37.368 ' 00:29:37.368 01:12:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:37.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.368 --rc genhtml_branch_coverage=1 00:29:37.368 --rc genhtml_function_coverage=1 00:29:37.368 --rc genhtml_legend=1 00:29:37.368 --rc geninfo_all_blocks=1 00:29:37.368 --rc geninfo_unexecuted_blocks=1 00:29:37.368 00:29:37.368 ' 00:29:37.368 01:12:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:37.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.368 --rc genhtml_branch_coverage=1 00:29:37.368 --rc genhtml_function_coverage=1 00:29:37.368 --rc genhtml_legend=1 00:29:37.368 --rc geninfo_all_blocks=1 00:29:37.368 --rc geninfo_unexecuted_blocks=1 00:29:37.368 00:29:37.368 ' 00:29:37.368 01:12:11 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:37.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.368 --rc genhtml_branch_coverage=1 00:29:37.368 --rc genhtml_function_coverage=1 00:29:37.368 --rc genhtml_legend=1 00:29:37.368 --rc geninfo_all_blocks=1 00:29:37.368 --rc geninfo_unexecuted_blocks=1 00:29:37.368 00:29:37.368 ' 00:29:37.368 01:12:11 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:37.368 01:12:11 -- bdev/nbd_common.sh@6 -- # set -e 00:29:37.368 01:12:11 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:37.368 01:12:11 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:37.368 01:12:11 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:37.368 01:12:11 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:37.368 01:12:11 -- bdev/blockdev.sh@18 -- # : 00:29:37.368 01:12:11 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:37.368 01:12:11 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:37.368 01:12:11 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:37.368 01:12:11 -- bdev/blockdev.sh@672 -- # uname -s 00:29:37.368 01:12:11 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:37.368 01:12:11 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:37.368 01:12:11 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:29:37.368 01:12:11 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:37.368 01:12:11 -- bdev/blockdev.sh@682 -- # dek= 00:29:37.368 01:12:11 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:37.368 01:12:11 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:37.368 01:12:11 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:37.368 01:12:11 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:29:37.368 01:12:11 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:29:37.368 01:12:11 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:37.368 01:12:11 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=150409 00:29:37.368 01:12:11 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:37.368 01:12:11 -- bdev/blockdev.sh@47 -- # waitforlisten 150409 00:29:37.368 01:12:11 -- common/autotest_common.sh@829 -- # '[' -z 150409 ']' 00:29:37.368 01:12:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.368 01:12:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:37.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.368 01:12:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.368 01:12:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:37.368 01:12:11 -- common/autotest_common.sh@10 -- # set +x 00:29:37.368 01:12:11 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:37.628 [2024-11-18 01:12:11.817316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:37.628 [2024-11-18 01:12:11.817595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150409 ] 00:29:37.628 [2024-11-18 01:12:11.974248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.887 [2024-11-18 01:12:12.056366] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:37.887 [2024-11-18 01:12:12.056605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.453 01:12:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:38.453 01:12:12 -- common/autotest_common.sh@862 -- # return 0 00:29:38.453 01:12:12 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:38.453 01:12:12 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:29:38.453 01:12:12 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:29:38.453 01:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.453 01:12:12 -- common/autotest_common.sh@10 -- # set +x 00:29:38.453 Malloc0 00:29:38.453 Malloc1 00:29:38.453 Malloc2 00:29:38.453 01:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.453 01:12:12 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:29:38.453 01:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.453 01:12:12 -- common/autotest_common.sh@10 -- # set +x 00:29:38.453 01:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.453 01:12:12 -- bdev/blockdev.sh@738 -- # cat 00:29:38.453 01:12:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:29:38.453 01:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.453 01:12:12 -- common/autotest_common.sh@10 -- # set +x 00:29:38.453 01:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.453 01:12:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:29:38.453 01:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.453 01:12:12 -- common/autotest_common.sh@10 -- # set +x 00:29:38.712 01:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.712 01:12:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:38.712 01:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.712 01:12:12 -- common/autotest_common.sh@10 -- # set +x 00:29:38.712 01:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.712 01:12:12 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:29:38.712 01:12:12 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:29:38.712 01:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.712 01:12:12 -- common/autotest_common.sh@10 -- # set +x 00:29:38.712 01:12:12 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:29:38.712 01:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.712 01:12:12 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:29:38.712 01:12:12 -- bdev/blockdev.sh@747 -- # jq -r .name 00:29:38.713 01:12:12 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "840daeb7-2293-4db1-a2ef-debbd68cfe87"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "840daeb7-2293-4db1-a2ef-debbd68cfe87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "840daeb7-2293-4db1-a2ef-debbd68cfe87",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1715e25a-7408-4162-92bf-1ae9b19a612a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "fbf16805-ee16-4cf0-b64b-4088bcde3b4f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "fda93a61-b7f1-4938-9200-71e1bb0619c1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:29:38.713 01:12:12 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:29:38.713 01:12:12 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:29:38.713 01:12:12 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:29:38.713 01:12:12 -- bdev/blockdev.sh@752 -- # killprocess 150409 00:29:38.713 01:12:12 -- common/autotest_common.sh@936 -- # '[' -z 150409 ']' 00:29:38.713 01:12:12 -- common/autotest_common.sh@940 -- # kill -0 150409 00:29:38.713 01:12:12 -- common/autotest_common.sh@941 -- # uname 00:29:38.713 01:12:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:38.713 01:12:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150409 00:29:38.713 01:12:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:38.713 killing process with pid 150409 00:29:38.713 01:12:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:38.713 01:12:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150409' 00:29:38.713 01:12:12 -- common/autotest_common.sh@955 -- # kill 150409 00:29:38.713 01:12:12 -- common/autotest_common.sh@960 -- # wait 150409 00:29:39.650 01:12:13 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:39.650 01:12:13 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:29:39.650 01:12:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:29:39.650 01:12:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:39.650 01:12:13 -- common/autotest_common.sh@10 -- # set +x 00:29:39.650 ************************************ 00:29:39.650 START TEST bdev_hello_world 00:29:39.650 ************************************ 00:29:39.650 01:12:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:29:39.650 [2024-11-18 01:12:13.834709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:39.650 [2024-11-18 01:12:13.834995] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150455 ] 00:29:39.650 [2024-11-18 01:12:13.989825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.909 [2024-11-18 01:12:14.072461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.169 [2024-11-18 01:12:14.342964] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:40.169 [2024-11-18 01:12:14.343059] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:29:40.169 [2024-11-18 01:12:14.343095] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:40.169 [2024-11-18 01:12:14.343501] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:40.169 [2024-11-18 01:12:14.343683] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:40.169 [2024-11-18 01:12:14.343716] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:40.169 [2024-11-18 01:12:14.343795] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:40.169 00:29:40.169 [2024-11-18 01:12:14.343839] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:40.428 00:29:40.428 real 0m1.040s 00:29:40.428 user 0m0.593s 00:29:40.428 sys 0m0.334s 00:29:40.428 01:12:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:40.428 01:12:14 -- common/autotest_common.sh@10 -- # set +x 00:29:40.428 ************************************ 00:29:40.428 END TEST bdev_hello_world 00:29:40.428 ************************************ 00:29:40.688 01:12:14 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:29:40.688 01:12:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:40.688 01:12:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:40.688 01:12:14 -- common/autotest_common.sh@10 -- # set +x 00:29:40.688 ************************************ 00:29:40.688 START TEST bdev_bounds 00:29:40.688 ************************************ 00:29:40.688 01:12:14 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:29:40.688 01:12:14 -- bdev/blockdev.sh@288 -- # bdevio_pid=150493 00:29:40.688 01:12:14 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:40.688 Process bdevio pid: 150493 00:29:40.688 01:12:14 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 150493' 00:29:40.688 01:12:14 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:40.688 01:12:14 -- bdev/blockdev.sh@291 -- # waitforlisten 150493 00:29:40.688 01:12:14 -- common/autotest_common.sh@829 -- # '[' -z 150493 ']' 00:29:40.688 01:12:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.688 01:12:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:40.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.688 01:12:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:40.688 01:12:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:40.688 01:12:14 -- common/autotest_common.sh@10 -- # set +x 00:29:40.688 [2024-11-18 01:12:14.957467] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:40.688 [2024-11-18 01:12:14.957734] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150493 ] 00:29:40.946 [2024-11-18 01:12:15.123913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:40.946 [2024-11-18 01:12:15.210687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.946 [2024-11-18 01:12:15.210908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.946 [2024-11-18 01:12:15.210911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.514 01:12:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:41.514 01:12:15 -- common/autotest_common.sh@862 -- # return 0 00:29:41.514 01:12:15 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:41.774 I/O targets: 00:29:41.774 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:29:41.774 00:29:41.774 00:29:41.774 CUnit - A unit testing framework for C - Version 2.1-3 00:29:41.774 http://cunit.sourceforge.net/ 00:29:41.774 00:29:41.774 00:29:41.774 Suite: bdevio tests on: raid5f 00:29:41.774 Test: blockdev write read block ...passed 00:29:41.774 Test: blockdev write zeroes read block ...passed 00:29:41.774 Test: blockdev write zeroes read no split ...passed 00:29:41.774 Test: blockdev write zeroes read split ...passed 00:29:41.774 Test: blockdev write zeroes read split partial ...passed 00:29:41.774 Test: blockdev reset ...passed 00:29:41.774 Test: blockdev write read 8 blocks ...passed 00:29:41.774 Test: blockdev write read size > 128k ...passed 00:29:41.774 Test: blockdev write read invalid size ...passed 00:29:41.774 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:41.774 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:41.774 Test: blockdev write read max offset ...passed 00:29:41.774 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:41.774 Test: blockdev writev readv 8 blocks ...passed 00:29:41.774 Test: blockdev writev readv 30 x 1block ...passed 00:29:41.774 Test: blockdev writev readv block ...passed 00:29:41.774 Test: blockdev writev readv size > 128k ...passed 00:29:41.774 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:41.774 Test: blockdev comparev and writev ...passed 00:29:41.774 Test: blockdev nvme passthru rw ...passed 00:29:41.774 Test: blockdev nvme passthru vendor specific ...passed 00:29:41.774 Test: blockdev nvme admin passthru ...passed 00:29:41.774 Test: blockdev copy ...passed 00:29:41.774 00:29:41.774 Run Summary: Type Total Ran Passed Failed Inactive 00:29:41.774 suites 1 1 n/a 0 0 00:29:41.774 tests 23 23 23 0 0 00:29:41.774 asserts 130 130 130 0 n/a 00:29:41.774 00:29:41.774 Elapsed time = 0.283 seconds 00:29:41.774 0 00:29:41.774 01:12:16 -- bdev/blockdev.sh@293 -- # killprocess 150493 00:29:41.774 01:12:16 -- common/autotest_common.sh@936 -- # '[' -z 150493 ']' 00:29:41.774 01:12:16 -- common/autotest_common.sh@940 -- # kill -0 150493 00:29:41.774 01:12:16 -- common/autotest_common.sh@941 -- # uname 00:29:41.774 01:12:16 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:41.774 01:12:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150493 00:29:41.774 01:12:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:41.774 01:12:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:41.774 killing process with pid 150493 00:29:41.774 01:12:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150493' 00:29:41.774 01:12:16 -- common/autotest_common.sh@955 -- # kill 150493 00:29:41.774 01:12:16 -- common/autotest_common.sh@960 -- # wait 150493 00:29:42.342 01:12:16 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:42.342 00:29:42.342 real 0m1.680s 00:29:42.342 user 0m3.780s 00:29:42.342 sys 0m0.501s 00:29:42.342 ************************************ 00:29:42.342 END TEST bdev_bounds 00:29:42.342 ************************************ 00:29:42.342 01:12:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:42.342 01:12:16 -- common/autotest_common.sh@10 -- # set +x 00:29:42.342 01:12:16 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:29:42.342 01:12:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:29:42.342 01:12:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:42.342 01:12:16 -- common/autotest_common.sh@10 -- # set +x 00:29:42.342 ************************************ 00:29:42.342 START TEST bdev_nbd 00:29:42.342 ************************************ 00:29:42.342 01:12:16 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:29:42.342 01:12:16 -- bdev/blockdev.sh@298 -- # uname -s 00:29:42.342 01:12:16 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:42.342 01:12:16 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:42.342 01:12:16 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:42.342 01:12:16 -- bdev/blockdev.sh@302 -- # bdev_all=('raid5f') 00:29:42.342 01:12:16 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:42.342 01:12:16 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:29:42.342 01:12:16 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:42.342 01:12:16 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:42.342 01:12:16 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:42.342 01:12:16 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:29:42.342 01:12:16 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:29:42.342 01:12:16 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:42.342 01:12:16 -- bdev/blockdev.sh@313 -- # bdev_list=('raid5f') 00:29:42.342 01:12:16 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:42.342 01:12:16 -- bdev/blockdev.sh@316 -- # nbd_pid=150556 00:29:42.342 01:12:16 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:42.342 01:12:16 -- bdev/blockdev.sh@318 -- # waitforlisten 150556 /var/tmp/spdk-nbd.sock 00:29:42.342 01:12:16 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:42.342 01:12:16 -- common/autotest_common.sh@829 -- # '[' -z 150556 ']' 00:29:42.342 01:12:16 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:42.342 01:12:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:42.342 01:12:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:42.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:42.342 01:12:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:42.342 01:12:16 -- common/autotest_common.sh@10 -- # set +x 00:29:42.342 [2024-11-18 01:12:16.690222] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:42.342 [2024-11-18 01:12:16.690418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.601 [2024-11-18 01:12:16.837833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.601 [2024-11-18 01:12:16.907754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.537 01:12:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:43.537 01:12:17 -- common/autotest_common.sh@862 -- # return 0 00:29:43.537 01:12:17 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@24 -- # local i 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:43.537 01:12:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:43.537 01:12:17 -- common/autotest_common.sh@867 -- # local i 00:29:43.537 01:12:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:43.537 01:12:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:43.537 01:12:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:43.537 01:12:17 -- common/autotest_common.sh@871 -- # break 00:29:43.537 01:12:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:43.537 01:12:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:43.537 01:12:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:43.537 1+0 records in 00:29:43.537 1+0 records out 00:29:43.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000634096 s, 6.5 MB/s 00:29:43.537 01:12:17 -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:43.537 01:12:17 -- common/autotest_common.sh@884 -- # size=4096 00:29:43.537 01:12:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:43.537 01:12:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:43.537 01:12:17 -- common/autotest_common.sh@887 -- # return 0 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:43.537 01:12:17 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:43.538 01:12:17 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:43.796 01:12:18 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:43.796 { 00:29:43.796 "nbd_device": "/dev/nbd0", 00:29:43.796 "bdev_name": "raid5f" 00:29:43.796 } 00:29:43.796 ]' 00:29:43.796 01:12:18 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:43.796 01:12:18 -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:43.796 { 00:29:43.796 "nbd_device": "/dev/nbd0", 00:29:43.796 "bdev_name": "raid5f" 00:29:43.796 } 00:29:43.796 ]' 00:29:43.796 01:12:18 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@51 -- # local i 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@41 -- # break 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@45 -- # return 0 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:44.055 01:12:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@65 -- # true 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@65 -- # count=0 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@122 -- # count=0 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@127 -- # return 0 00:29:44.314 01:12:18 -- bdev/blockdev.sh@321 -- # 
nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@12 -- # local i 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:44.314 01:12:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:29:44.573 /dev/nbd0 00:29:44.573 01:12:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:44.573 01:12:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:44.573 01:12:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:44.573 01:12:18 -- common/autotest_common.sh@867 -- # local i 00:29:44.573 01:12:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:44.573 01:12:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:44.573 01:12:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:44.573 01:12:18 -- common/autotest_common.sh@871 -- # break 00:29:44.573 01:12:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:44.573 01:12:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:44.573 01:12:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:44.573 1+0 records in 00:29:44.573 1+0 records out 00:29:44.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274278 s, 14.9 MB/s 00:29:44.573 01:12:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:44.573 01:12:18 -- common/autotest_common.sh@884 -- # size=4096 00:29:44.573 01:12:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:44.573 01:12:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:44.573 01:12:18 -- common/autotest_common.sh@887 -- # return 0 00:29:44.573 01:12:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:44.573 01:12:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:44.573 01:12:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:44.573 01:12:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:44.573 01:12:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:44.832 { 00:29:44.832 "nbd_device": "/dev/nbd0", 00:29:44.832 "bdev_name": "raid5f" 00:29:44.832 } 00:29:44.832 ]' 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:44.832 { 00:29:44.832 "nbd_device": "/dev/nbd0", 00:29:44.832 "bdev_name": "raid5f" 00:29:44.832 
} 00:29:44.832 ]' 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@65 -- # count=1 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@66 -- # echo 1 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@95 -- # count=1 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:44.832 256+0 records in 00:29:44.832 256+0 records out 00:29:44.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127154 s, 82.5 MB/s 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:44.832 01:12:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:45.090 256+0 records in 00:29:45.090 256+0 records out 00:29:45.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256508 s, 40.9 MB/s 00:29:45.090 01:12:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@51 -- # local i 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:45.091 01:12:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:45.350 01:12:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:45.350 01:12:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:45.350 01:12:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:45.350 01:12:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:45.350 01:12:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
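The nbd data path is exercised above by writing 1 MiB of random data through /dev/nbd0 with O_DIRECT and reading it back for a byte-wise compare. Stripped of the helper plumbing, the verify amounts to roughly the following, with the same sizes and temp file name as in the trace:

  tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256                 # 1 MiB of random data
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct       # push it through the nbd device
  cmp -b -n 1M "$tmp" /dev/nbd0                                  # byte-wise readback comparison
  rm "$tmp"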
00:29:45.350 01:12:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:45.350 01:12:19 -- bdev/nbd_common.sh@41 -- # break 00:29:45.350 01:12:19 -- bdev/nbd_common.sh@45 -- # return 0 00:29:45.350 01:12:19 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:45.350 01:12:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:45.350 01:12:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:45.350 01:12:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:45.350 01:12:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:45.350 01:12:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:45.609 01:12:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:45.609 01:12:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:45.609 01:12:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:45.609 01:12:19 -- bdev/nbd_common.sh@65 -- # true 00:29:45.609 01:12:19 -- bdev/nbd_common.sh@65 -- # count=0 00:29:45.609 01:12:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:45.609 01:12:19 -- bdev/nbd_common.sh@104 -- # count=0 00:29:45.609 01:12:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:45.609 01:12:19 -- bdev/nbd_common.sh@109 -- # return 0 00:29:45.609 01:12:19 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:45.609 01:12:19 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:45.609 01:12:19 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:29:45.609 01:12:19 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:45.609 01:12:19 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:45.609 01:12:19 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:45.869 malloc_lvol_verify 00:29:45.869 01:12:20 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:46.127 451e6a4c-68b2-44a1-8263-2ac88534368f 00:29:46.127 01:12:20 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:46.127 52f30cca-1ee5-47a7-b9b0-164436c574e2 00:29:46.386 01:12:20 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:46.386 /dev/nbd0 00:29:46.386 01:12:20 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:46.386 mke2fs 1.46.5 (30-Dec-2021) 00:29:46.386 00:29:46.386 Filesystem too small for a journal 00:29:46.386 Discarding device blocks: 0/1024 done 00:29:46.386 Creating filesystem with 1024 4k blocks and 1024 inodes 00:29:46.386 00:29:46.386 Allocating group tables: 0/1 done 00:29:46.386 Writing inode tables: 0/1 done 00:29:46.386 Writing superblocks and filesystem accounting information: 0/1 done 00:29:46.386 00:29:46.386 01:12:20 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:46.386 01:12:20 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:46.386 01:12:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:46.386 01:12:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:46.386 01:12:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:46.386 01:12:20 -- bdev/nbd_common.sh@51 -- # local i 00:29:46.386 01:12:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
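The nbd_with_lvol_verify sequence above stacks a logical volume on a malloc bdev, exports it over NBD, and treats a successful mkfs.ext4 as the pass criterion. Condensed to the RPC chain actually issued, with the same names and sizes as in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    $rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
    $rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $rpc -s $sock bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside the store
    $rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                             # mkfs_ret=0 is what gets checked
    $rpc -s $sock nbd_stop_disk /dev/nbd0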
00:29:46.386 01:12:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:46.645 01:12:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:46.646 01:12:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:46.646 01:12:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:46.646 01:12:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:46.646 01:12:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:46.646 01:12:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:46.646 01:12:20 -- bdev/nbd_common.sh@41 -- # break 00:29:46.646 01:12:20 -- bdev/nbd_common.sh@45 -- # return 0 00:29:46.646 01:12:20 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:46.646 01:12:20 -- bdev/nbd_common.sh@147 -- # return 0 00:29:46.646 01:12:20 -- bdev/blockdev.sh@324 -- # killprocess 150556 00:29:46.646 01:12:20 -- common/autotest_common.sh@936 -- # '[' -z 150556 ']' 00:29:46.646 01:12:20 -- common/autotest_common.sh@940 -- # kill -0 150556 00:29:46.646 01:12:20 -- common/autotest_common.sh@941 -- # uname 00:29:46.646 01:12:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:46.646 01:12:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150556 00:29:46.646 01:12:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:46.646 01:12:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:46.646 01:12:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150556' 00:29:46.646 killing process with pid 150556 00:29:46.646 01:12:20 -- common/autotest_common.sh@955 -- # kill 150556 00:29:46.646 01:12:20 -- common/autotest_common.sh@960 -- # wait 150556 00:29:46.905 01:12:21 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:29:46.905 00:29:46.905 real 0m4.602s 00:29:46.905 user 0m6.598s 00:29:46.905 sys 0m1.433s 00:29:46.905 01:12:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:46.905 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:29:46.905 ************************************ 00:29:46.905 END TEST bdev_nbd 00:29:46.905 ************************************ 00:29:46.905 01:12:21 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:29:46.905 01:12:21 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:29:46.905 01:12:21 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:29:46.905 01:12:21 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:29:46.905 01:12:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:46.905 01:12:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:46.905 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:29:46.905 ************************************ 00:29:46.905 START TEST bdev_fio 00:29:46.905 ************************************ 00:29:46.905 01:12:21 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:29:46.905 01:12:21 -- bdev/blockdev.sh@329 -- # local env_context 00:29:46.905 01:12:21 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:29:46.905 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:29:46.905 01:12:21 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:29:47.165 01:12:21 -- bdev/blockdev.sh@337 -- # echo '' 00:29:47.165 01:12:21 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:29:47.165 01:12:21 -- bdev/blockdev.sh@337 -- # env_context= 00:29:47.165 01:12:21 -- bdev/blockdev.sh@338 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:29:47.165 01:12:21 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:47.165 01:12:21 -- common/autotest_common.sh@1270 -- # local workload=verify 00:29:47.165 01:12:21 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:29:47.165 01:12:21 -- common/autotest_common.sh@1272 -- # local env_context= 00:29:47.165 01:12:21 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:29:47.165 01:12:21 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:29:47.165 01:12:21 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:29:47.165 01:12:21 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:29:47.165 01:12:21 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:47.165 01:12:21 -- common/autotest_common.sh@1290 -- # cat 00:29:47.165 01:12:21 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:29:47.165 01:12:21 -- common/autotest_common.sh@1303 -- # cat 00:29:47.165 01:12:21 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:29:47.165 01:12:21 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:29:47.165 01:12:21 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:29:47.165 01:12:21 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:29:47.165 01:12:21 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:29:47.165 01:12:21 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:29:47.165 01:12:21 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:29:47.165 01:12:21 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:29:47.165 01:12:21 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:47.165 01:12:21 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:29:47.165 01:12:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:47.165 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:29:47.165 ************************************ 00:29:47.165 START TEST bdev_fio_rw_verify 00:29:47.165 ************************************ 00:29:47.165 01:12:21 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:47.165 01:12:21 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:47.165 01:12:21 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:29:47.165 01:12:21 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:29:47.165 01:12:21 -- common/autotest_common.sh@1328 -- # local sanitizers 00:29:47.165 01:12:21 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:47.165 01:12:21 -- common/autotest_common.sh@1330 -- # shift 00:29:47.165 01:12:21 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:29:47.165 01:12:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:29:47.165 01:12:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:47.165 01:12:21 -- common/autotest_common.sh@1334 -- # grep libasan 00:29:47.165 01:12:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:29:47.165 01:12:21 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:29:47.165 01:12:21 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:29:47.165 01:12:21 -- common/autotest_common.sh@1336 -- # break 00:29:47.165 01:12:21 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:47.165 01:12:21 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:47.424 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:47.424 fio-3.35 00:29:47.424 Starting 1 thread 00:29:59.660 00:29:59.660 job_raid5f: (groupid=0, jobs=1): err= 0: pid=150777: Mon Nov 18 01:12:32 2024 00:29:59.660 read: IOPS=12.9k, BW=50.2MiB/s (52.6MB/s)(502MiB/10001msec) 00:29:59.660 slat (usec): min=16, max=106, avg=18.60, stdev= 2.16 00:29:59.660 clat (usec): min=10, max=335, avg=125.45, stdev=44.61 00:29:59.660 lat (usec): min=28, max=366, avg=144.06, stdev=45.05 00:29:59.660 clat percentiles (usec): 00:29:59.660 | 50.000th=[ 130], 99.000th=[ 212], 99.900th=[ 306], 99.990th=[ 322], 00:29:59.660 | 99.999th=[ 330] 00:29:59.660 write: IOPS=13.5k, BW=52.6MiB/s (55.1MB/s)(519MiB/9880msec); 0 zone resets 00:29:59.660 slat (usec): min=7, max=233, avg=15.71, stdev= 3.05 00:29:59.660 clat (usec): min=55, max=3047, avg=284.02, stdev=46.22 00:29:59.660 lat (usec): min=70, max=3095, avg=299.72, stdev=47.42 00:29:59.660 clat percentiles (usec): 00:29:59.660 | 50.000th=[ 289], 99.000th=[ 412], 99.900th=[ 553], 99.990th=[ 1516], 00:29:59.660 | 99.999th=[ 2999] 00:29:59.660 bw ( KiB/s): min=49624, max=57064, per=98.73%, avg=53130.11, stdev=2093.10, samples=19 00:29:59.660 iops : min=12406, max=14266, avg=13282.53, stdev=523.27, samples=19 00:29:59.660 lat (usec) : 20=0.01%, 50=0.01%, 100=17.21%, 250=42.72%, 500=39.88% 00:29:59.660 lat (usec) : 750=0.18%, 1000=0.01% 00:29:59.660 lat (msec) : 2=0.01%, 4=0.01% 00:29:59.660 cpu : usr=99.59%, sys=0.37%, ctx=90, majf=0, minf=12202 00:29:59.660 IO depths : 1=7.6%, 2=20.0%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:59.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.660 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.660 issued rwts: total=128523,132918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.660 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:59.660 00:29:59.660 Run status group 0 (all jobs): 00:29:59.660 READ: bw=50.2MiB/s 
(52.6MB/s), 50.2MiB/s-50.2MiB/s (52.6MB/s-52.6MB/s), io=502MiB (526MB), run=10001-10001msec 00:29:59.660 WRITE: bw=52.6MiB/s (55.1MB/s), 52.6MiB/s-52.6MiB/s (55.1MB/s-55.1MB/s), io=519MiB (544MB), run=9880-9880msec 00:29:59.660 ----------------------------------------------------- 00:29:59.660 Suppressions used: 00:29:59.660 count bytes template 00:29:59.660 1 7 /usr/src/fio/parse.c 00:29:59.660 409 39264 /usr/src/fio/iolog.c 00:29:59.660 1 904 libcrypto.so 00:29:59.660 ----------------------------------------------------- 00:29:59.660 00:29:59.660 ************************************ 00:29:59.660 END TEST bdev_fio_rw_verify 00:29:59.660 ************************************ 00:29:59.660 00:29:59.660 real 0m11.263s 00:29:59.660 user 0m11.951s 00:29:59.660 sys 0m0.773s 00:29:59.660 01:12:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:59.660 01:12:32 -- common/autotest_common.sh@10 -- # set +x 00:29:59.660 01:12:32 -- bdev/blockdev.sh@348 -- # rm -f 00:29:59.660 01:12:32 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:59.660 01:12:32 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:29:59.660 01:12:32 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:59.660 01:12:32 -- common/autotest_common.sh@1270 -- # local workload=trim 00:29:59.660 01:12:32 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:29:59.660 01:12:32 -- common/autotest_common.sh@1272 -- # local env_context= 00:29:59.661 01:12:32 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:29:59.661 01:12:32 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:29:59.661 01:12:32 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:29:59.661 01:12:32 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:29:59.661 01:12:32 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:59.661 01:12:32 -- common/autotest_common.sh@1290 -- # cat 00:29:59.661 01:12:32 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:29:59.661 01:12:32 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:29:59.661 01:12:32 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:29:59.661 01:12:32 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "840daeb7-2293-4db1-a2ef-debbd68cfe87"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "840daeb7-2293-4db1-a2ef-debbd68cfe87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "840daeb7-2293-4db1-a2ef-debbd68cfe87",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1715e25a-7408-4162-92bf-1ae9b19a612a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' 
"name": "Malloc1",' ' "uuid": "fbf16805-ee16-4cf0-b64b-4088bcde3b4f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "fda93a61-b7f1-4938-9200-71e1bb0619c1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:29:59.661 01:12:32 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:29:59.661 01:12:32 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:29:59.661 01:12:32 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:59.661 /home/vagrant/spdk_repo/spdk 00:29:59.661 01:12:32 -- bdev/blockdev.sh@360 -- # popd 00:29:59.661 01:12:32 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:29:59.661 01:12:32 -- bdev/blockdev.sh@362 -- # return 0 00:29:59.661 00:29:59.661 real 0m11.485s 00:29:59.661 user 0m12.076s 00:29:59.661 sys 0m0.870s 00:29:59.661 01:12:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:59.661 ************************************ 00:29:59.661 END TEST bdev_fio 00:29:59.661 ************************************ 00:29:59.661 01:12:32 -- common/autotest_common.sh@10 -- # set +x 00:29:59.661 01:12:32 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:59.661 01:12:32 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:59.661 01:12:32 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:29:59.661 01:12:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:59.661 01:12:32 -- common/autotest_common.sh@10 -- # set +x 00:29:59.661 ************************************ 00:29:59.661 START TEST bdev_verify 00:29:59.661 ************************************ 00:29:59.661 01:12:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:59.661 [2024-11-18 01:12:32.943580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:59.661 [2024-11-18 01:12:32.943830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150935 ] 00:29:59.661 [2024-11-18 01:12:33.101802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:59.661 [2024-11-18 01:12:33.152193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.661 [2024-11-18 01:12:33.152197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.661 Running I/O for 5 seconds... 
00:30:04.933 00:30:04.933 Latency(us) 00:30:04.933 [2024-11-18T01:12:39.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.933 [2024-11-18T01:12:39.332Z] Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:04.933 Verification LBA range: start 0x0 length 0x2000 00:30:04.933 raid5f : 5.02 7592.18 29.66 0.00 0.00 26734.36 203.82 26838.55 00:30:04.933 [2024-11-18T01:12:39.332Z] Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:04.933 Verification LBA range: start 0x2000 length 0x2000 00:30:04.933 raid5f : 5.01 9663.54 37.75 0.00 0.00 20989.75 181.39 16477.62 00:30:04.933 [2024-11-18T01:12:39.332Z] =================================================================================================================== 00:30:04.933 [2024-11-18T01:12:39.332Z] Total : 17255.72 67.41 0.00 0.00 23517.92 181.39 26838.55 00:30:04.933 00:30:04.933 real 0m5.960s 00:30:04.933 user 0m11.071s 00:30:04.933 sys 0m0.229s 00:30:04.933 01:12:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:04.933 01:12:38 -- common/autotest_common.sh@10 -- # set +x 00:30:04.933 ************************************ 00:30:04.933 END TEST bdev_verify 00:30:04.933 ************************************ 00:30:04.933 01:12:38 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:04.933 01:12:38 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:30:04.933 01:12:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:04.933 01:12:38 -- common/autotest_common.sh@10 -- # set +x 00:30:04.933 ************************************ 00:30:04.933 START TEST bdev_verify_big_io 00:30:04.933 ************************************ 00:30:04.933 01:12:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:04.933 [2024-11-18 01:12:38.956707] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:04.933 [2024-11-18 01:12:38.957525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151029 ] 00:30:04.933 [2024-11-18 01:12:39.102303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:04.933 [2024-11-18 01:12:39.180767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.933 [2024-11-18 01:12:39.180776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.191 Running I/O for 5 seconds... 
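Stepping back to the bdev_fio stage above: because this build is ASan-instrumented, the fio wrapper first resolves libasan from the plugin's ldd output and preloads it together with the spdk_bdev external engine before starting fio. A reduced sketch of that launch, using the paths and flags recorded in the log:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # ASan has to be loaded before the plugin, so find it and put it first in LD_PRELOAD.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output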
00:30:10.466 00:30:10.466 Latency(us) 00:30:10.466 [2024-11-18T01:12:44.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.466 [2024-11-18T01:12:44.865Z] Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:10.466 Verification LBA range: start 0x0 length 0x200 00:30:10.466 raid5f : 5.18 533.26 33.33 0.00 0.00 6228945.56 195.05 191739.61 00:30:10.466 [2024-11-18T01:12:44.865Z] Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:10.466 Verification LBA range: start 0x200 length 0x200 00:30:10.466 raid5f : 5.16 660.32 41.27 0.00 0.00 5045647.95 148.24 151793.86 00:30:10.466 [2024-11-18T01:12:44.865Z] =================================================================================================================== 00:30:10.466 [2024-11-18T01:12:44.865Z] Total : 1193.58 74.60 0.00 0.00 5575482.70 148.24 191739.61 00:30:10.726 00:30:10.726 real 0m6.174s 00:30:10.726 user 0m11.417s 00:30:10.726 sys 0m0.325s 00:30:10.726 01:12:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:10.726 01:12:45 -- common/autotest_common.sh@10 -- # set +x 00:30:10.726 ************************************ 00:30:10.726 END TEST bdev_verify_big_io 00:30:10.726 ************************************ 00:30:10.985 01:12:45 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:10.985 01:12:45 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:30:10.985 01:12:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:10.985 01:12:45 -- common/autotest_common.sh@10 -- # set +x 00:30:10.985 ************************************ 00:30:10.985 START TEST bdev_write_zeroes 00:30:10.985 ************************************ 00:30:10.985 01:12:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:10.985 [2024-11-18 01:12:45.202814] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:10.985 [2024-11-18 01:12:45.203028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151125 ] 00:30:10.985 [2024-11-18 01:12:45.346613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.244 [2024-11-18 01:12:45.421128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.503 Running I/O for 1 seconds... 
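Both verify passes above (4096-byte and 65536-byte I/O) are driven by the bdevperf example application against the generated bdev.json; the invocations recorded by blockdev.sh reduce to the following sketch:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds,
    # -m 0x3 core mask (cores 0 and 1); -C passed through as recorded in the run above.
    $bdevperf --json "$conf" -q 128 -o 4096  -w verify -t 5 -C -m 0x3
    $bdevperf --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3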
00:30:12.441 00:30:12.441 Latency(us) 00:30:12.441 [2024-11-18T01:12:46.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.442 [2024-11-18T01:12:46.841Z] Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:12.442 raid5f : 1.00 30034.79 117.32 0.00 0.00 4250.56 1256.11 5336.50 00:30:12.442 [2024-11-18T01:12:46.841Z] =================================================================================================================== 00:30:12.442 [2024-11-18T01:12:46.841Z] Total : 30034.79 117.32 0.00 0.00 4250.56 1256.11 5336.50 00:30:13.010 00:30:13.010 real 0m2.014s 00:30:13.010 user 0m1.591s 00:30:13.010 sys 0m0.308s 00:30:13.010 01:12:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:13.010 ************************************ 00:30:13.010 END TEST bdev_write_zeroes 00:30:13.010 01:12:47 -- common/autotest_common.sh@10 -- # set +x 00:30:13.010 ************************************ 00:30:13.010 01:12:47 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:13.010 01:12:47 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:30:13.010 01:12:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:13.010 01:12:47 -- common/autotest_common.sh@10 -- # set +x 00:30:13.010 ************************************ 00:30:13.010 START TEST bdev_json_nonenclosed 00:30:13.010 ************************************ 00:30:13.010 01:12:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:13.010 [2024-11-18 01:12:47.308987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:13.010 [2024-11-18 01:12:47.309248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151175 ] 00:30:13.269 [2024-11-18 01:12:47.465859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.269 [2024-11-18 01:12:47.550952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.269 [2024-11-18 01:12:47.551245] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:30:13.269 [2024-11-18 01:12:47.551308] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:13.528 00:30:13.528 real 0m0.512s 00:30:13.528 user 0m0.238s 00:30:13.528 sys 0m0.175s 00:30:13.528 01:12:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:13.528 01:12:47 -- common/autotest_common.sh@10 -- # set +x 00:30:13.528 ************************************ 00:30:13.528 END TEST bdev_json_nonenclosed 00:30:13.528 ************************************ 00:30:13.528 01:12:47 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:13.528 01:12:47 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:30:13.528 01:12:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:13.528 01:12:47 -- common/autotest_common.sh@10 -- # set +x 00:30:13.528 ************************************ 00:30:13.528 START TEST bdev_json_nonarray 00:30:13.528 ************************************ 00:30:13.528 01:12:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:13.528 [2024-11-18 01:12:47.890304] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:13.528 [2024-11-18 01:12:47.890755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151214 ] 00:30:13.787 [2024-11-18 01:12:48.046287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.787 [2024-11-18 01:12:48.121737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.787 [2024-11-18 01:12:48.121982] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
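The two errors above are the expected outcomes of feeding bdevperf deliberately malformed configs: SPDK's JSON config loader requires a top-level object whose "subsystems" member is an array. The actual contents of nonenclosed.json and nonarray.json are not reproduced in the log, so the skeleton and failure mapping below are illustrative only:

    # Shape accepted by --json / --spdk_json_conf (minimal skeleton):
    cat > good.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF
    # "not enclosed in {}"               -> the top-level value is not a JSON object
    # "'subsystems' should be an array"  -> "subsystems" exists but is not an array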
00:30:13.787 [2024-11-18 01:12:48.122027] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:14.047 00:30:14.047 real 0m0.500s 00:30:14.047 user 0m0.244s 00:30:14.047 sys 0m0.152s 00:30:14.047 01:12:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:14.047 01:12:48 -- common/autotest_common.sh@10 -- # set +x 00:30:14.047 ************************************ 00:30:14.047 END TEST bdev_json_nonarray 00:30:14.047 ************************************ 00:30:14.047 01:12:48 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:30:14.047 01:12:48 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:30:14.047 01:12:48 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:30:14.047 01:12:48 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:30:14.047 01:12:48 -- bdev/blockdev.sh@809 -- # cleanup 00:30:14.047 01:12:48 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:14.047 01:12:48 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:14.047 01:12:48 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:30:14.047 01:12:48 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:30:14.047 01:12:48 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:30:14.047 01:12:48 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:30:14.047 00:30:14.047 real 0m36.841s 00:30:14.047 user 0m49.920s 00:30:14.047 sys 0m5.460s 00:30:14.047 01:12:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:14.047 ************************************ 00:30:14.047 END TEST blockdev_raid5f 00:30:14.047 01:12:48 -- common/autotest_common.sh@10 -- # set +x 00:30:14.047 ************************************ 00:30:14.047 01:12:48 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:30:14.047 01:12:48 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:30:14.047 01:12:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:14.047 01:12:48 -- common/autotest_common.sh@10 -- # set +x 00:30:14.047 01:12:48 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:30:14.047 01:12:48 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:30:14.047 01:12:48 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:30:14.047 01:12:48 -- common/autotest_common.sh@10 -- # set +x 00:30:16.598 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:16.598 Waiting for block devices as requested 00:30:16.598 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:17.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:17.166 Cleaning 00:30:17.166 Removing: /var/run/dpdk/spdk0/config 00:30:17.166 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:17.166 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:17.166 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:17.166 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:17.166 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:17.166 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:17.166 Removing: /dev/shm/spdk_tgt_trace.pid115012 00:30:17.166 Removing: /var/run/dpdk/spdk0 00:30:17.166 Removing: /var/run/dpdk/spdk_pid114812 00:30:17.166 Removing: /var/run/dpdk/spdk_pid115012 00:30:17.166 Removing: /var/run/dpdk/spdk_pid115305 00:30:17.166 Removing: /var/run/dpdk/spdk_pid115549 00:30:17.166 Removing: /var/run/dpdk/spdk_pid115727 00:30:17.166 Removing: /var/run/dpdk/spdk_pid115818 00:30:17.166 Removing: /var/run/dpdk/spdk_pid115911 
00:30:17.166 Removing: /var/run/dpdk/spdk_pid116028 00:30:17.166 Removing: /var/run/dpdk/spdk_pid116124 00:30:17.166 Removing: /var/run/dpdk/spdk_pid116165 00:30:17.166 Removing: /var/run/dpdk/spdk_pid116217 00:30:17.166 Removing: /var/run/dpdk/spdk_pid116289 00:30:17.166 Removing: /var/run/dpdk/spdk_pid116412 00:30:17.166 Removing: /var/run/dpdk/spdk_pid116937 00:30:17.166 Removing: /var/run/dpdk/spdk_pid116998 00:30:17.166 Removing: /var/run/dpdk/spdk_pid117061 00:30:17.166 Removing: /var/run/dpdk/spdk_pid117076 00:30:17.166 Removing: /var/run/dpdk/spdk_pid117155 00:30:17.166 Removing: /var/run/dpdk/spdk_pid117176 00:30:17.425 Removing: /var/run/dpdk/spdk_pid117273 00:30:17.425 Removing: /var/run/dpdk/spdk_pid117294 00:30:17.425 Removing: /var/run/dpdk/spdk_pid117339 00:30:17.425 Removing: /var/run/dpdk/spdk_pid117362 00:30:17.425 Removing: /var/run/dpdk/spdk_pid117419 00:30:17.425 Removing: /var/run/dpdk/spdk_pid117442 00:30:17.425 Removing: /var/run/dpdk/spdk_pid117606 00:30:17.425 Removing: /var/run/dpdk/spdk_pid117650 00:30:17.425 Removing: /var/run/dpdk/spdk_pid117686 00:30:17.425 Removing: /var/run/dpdk/spdk_pid117779 00:30:17.425 Removing: /var/run/dpdk/spdk_pid117842 00:30:17.425 Removing: /var/run/dpdk/spdk_pid117881 00:30:17.425 Removing: /var/run/dpdk/spdk_pid117962 00:30:17.425 Removing: /var/run/dpdk/spdk_pid117987 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118032 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118066 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118100 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118135 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118175 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118205 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118245 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118280 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118325 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118348 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118393 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118425 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118463 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118493 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118538 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118566 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118607 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118643 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118676 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118712 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118752 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118780 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118820 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118857 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118895 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118925 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118970 00:30:17.425 Removing: /var/run/dpdk/spdk_pid118993 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119038 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119071 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119108 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119145 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119189 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119222 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119263 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119300 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119340 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119368 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119415 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119501 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119615 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119794 00:30:17.426 
Removing: /var/run/dpdk/spdk_pid119860 00:30:17.426 Removing: /var/run/dpdk/spdk_pid119898 00:30:17.426 Removing: /var/run/dpdk/spdk_pid121082 00:30:17.426 Removing: /var/run/dpdk/spdk_pid121291 00:30:17.426 Removing: /var/run/dpdk/spdk_pid121480 00:30:17.684 Removing: /var/run/dpdk/spdk_pid121588 00:30:17.684 Removing: /var/run/dpdk/spdk_pid121699 00:30:17.684 Removing: /var/run/dpdk/spdk_pid121764 00:30:17.684 Removing: /var/run/dpdk/spdk_pid121786 00:30:17.684 Removing: /var/run/dpdk/spdk_pid121824 00:30:17.684 Removing: /var/run/dpdk/spdk_pid122289 00:30:17.684 Removing: /var/run/dpdk/spdk_pid122376 00:30:17.684 Removing: /var/run/dpdk/spdk_pid122479 00:30:17.684 Removing: /var/run/dpdk/spdk_pid122525 00:30:17.685 Removing: /var/run/dpdk/spdk_pid123681 00:30:17.685 Removing: /var/run/dpdk/spdk_pid124546 00:30:17.685 Removing: /var/run/dpdk/spdk_pid125407 00:30:17.685 Removing: /var/run/dpdk/spdk_pid126504 00:30:17.685 Removing: /var/run/dpdk/spdk_pid127548 00:30:17.685 Removing: /var/run/dpdk/spdk_pid128596 00:30:17.685 Removing: /var/run/dpdk/spdk_pid130038 00:30:17.685 Removing: /var/run/dpdk/spdk_pid131221 00:30:17.685 Removing: /var/run/dpdk/spdk_pid132394 00:30:17.685 Removing: /var/run/dpdk/spdk_pid133050 00:30:17.685 Removing: /var/run/dpdk/spdk_pid133576 00:30:17.685 Removing: /var/run/dpdk/spdk_pid134184 00:30:17.685 Removing: /var/run/dpdk/spdk_pid134655 00:30:17.685 Removing: /var/run/dpdk/spdk_pid135210 00:30:17.685 Removing: /var/run/dpdk/spdk_pid135741 00:30:17.685 Removing: /var/run/dpdk/spdk_pid136361 00:30:17.685 Removing: /var/run/dpdk/spdk_pid136861 00:30:17.685 Removing: /var/run/dpdk/spdk_pid138199 00:30:17.685 Removing: /var/run/dpdk/spdk_pid138781 00:30:17.685 Removing: /var/run/dpdk/spdk_pid139310 00:30:17.685 Removing: /var/run/dpdk/spdk_pid140776 00:30:17.685 Removing: /var/run/dpdk/spdk_pid141423 00:30:17.685 Removing: /var/run/dpdk/spdk_pid142020 00:30:17.685 Removing: /var/run/dpdk/spdk_pid142773 00:30:17.685 Removing: /var/run/dpdk/spdk_pid142818 00:30:17.685 Removing: /var/run/dpdk/spdk_pid142862 00:30:17.685 Removing: /var/run/dpdk/spdk_pid142908 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143045 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143190 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143428 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143727 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143751 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143796 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143814 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143837 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143857 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143877 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143898 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143925 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143937 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143954 00:30:17.685 Removing: /var/run/dpdk/spdk_pid143985 00:30:17.685 Removing: /var/run/dpdk/spdk_pid144001 00:30:17.685 Removing: /var/run/dpdk/spdk_pid144024 00:30:17.685 Removing: /var/run/dpdk/spdk_pid144044 00:30:17.685 Removing: /var/run/dpdk/spdk_pid144064 00:30:17.685 Removing: /var/run/dpdk/spdk_pid144084 00:30:17.685 Removing: /var/run/dpdk/spdk_pid144105 00:30:17.685 Removing: /var/run/dpdk/spdk_pid144120 00:30:17.685 Removing: /var/run/dpdk/spdk_pid144141 00:30:17.685 Removing: /var/run/dpdk/spdk_pid144181 00:30:17.685 Removing: /var/run/dpdk/spdk_pid144205 00:30:17.685 Removing: /var/run/dpdk/spdk_pid144239 00:30:17.685 Removing: /var/run/dpdk/spdk_pid144315 00:30:17.943 Removing: 
/var/run/dpdk/spdk_pid144355 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144367 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144408 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144423 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144438 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144490 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144509 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144538 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144552 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144572 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144591 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144607 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144620 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144630 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144646 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144680 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144721 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144741 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144775 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144796 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144803 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144858 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144877 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144915 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144930 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144947 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144956 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144969 00:30:17.944 Removing: /var/run/dpdk/spdk_pid144986 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145002 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145015 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145105 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145168 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145287 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145310 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145357 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145405 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145431 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145453 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145482 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145519 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145541 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145623 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145676 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145725 00:30:17.944 Removing: /var/run/dpdk/spdk_pid145990 00:30:17.944 Removing: /var/run/dpdk/spdk_pid146114 00:30:17.944 Removing: /var/run/dpdk/spdk_pid146149 00:30:17.944 Removing: /var/run/dpdk/spdk_pid146242 00:30:17.944 Removing: /var/run/dpdk/spdk_pid146315 00:30:17.944 Removing: /var/run/dpdk/spdk_pid146347 00:30:17.944 Removing: /var/run/dpdk/spdk_pid146587 00:30:17.944 Removing: /var/run/dpdk/spdk_pid146774 00:30:17.944 Removing: /var/run/dpdk/spdk_pid146877 00:30:17.944 Removing: /var/run/dpdk/spdk_pid146928 00:30:17.944 Removing: /var/run/dpdk/spdk_pid146951 00:30:17.944 Removing: /var/run/dpdk/spdk_pid147042 00:30:17.944 Removing: /var/run/dpdk/spdk_pid147466 00:30:17.944 Removing: /var/run/dpdk/spdk_pid147504 00:30:17.944 Removing: /var/run/dpdk/spdk_pid147801 00:30:17.944 Removing: /var/run/dpdk/spdk_pid147915 00:30:17.944 Removing: /var/run/dpdk/spdk_pid148009 00:30:17.944 Removing: /var/run/dpdk/spdk_pid148062 00:30:18.203 Removing: /var/run/dpdk/spdk_pid148094 00:30:18.203 Removing: /var/run/dpdk/spdk_pid148116 00:30:18.203 Removing: /var/run/dpdk/spdk_pid149483 00:30:18.203 Removing: /var/run/dpdk/spdk_pid149611 00:30:18.203 Removing: /var/run/dpdk/spdk_pid149616 00:30:18.203 Removing: 
/var/run/dpdk/spdk_pid149642 00:30:18.203 Removing: /var/run/dpdk/spdk_pid150155 00:30:18.203 Removing: /var/run/dpdk/spdk_pid150262 00:30:18.203 Removing: /var/run/dpdk/spdk_pid150409 00:30:18.203 Removing: /var/run/dpdk/spdk_pid150455 00:30:18.203 Removing: /var/run/dpdk/spdk_pid150493 00:30:18.203 Removing: /var/run/dpdk/spdk_pid150757 00:30:18.203 Removing: /var/run/dpdk/spdk_pid150935 00:30:18.203 Removing: /var/run/dpdk/spdk_pid151029 00:30:18.203 Removing: /var/run/dpdk/spdk_pid151125 00:30:18.203 Removing: /var/run/dpdk/spdk_pid151175 00:30:18.203 Removing: /var/run/dpdk/spdk_pid151214 00:30:18.203 Clean 00:30:18.203 killing process with pid 105005 00:30:18.203 killing process with pid 105006 00:30:18.203 01:12:52 -- common/autotest_common.sh@1446 -- # return 0 00:30:18.462 01:12:52 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:30:18.462 01:12:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.462 01:12:52 -- common/autotest_common.sh@10 -- # set +x 00:30:18.462 01:12:52 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:30:18.462 01:12:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.462 01:12:52 -- common/autotest_common.sh@10 -- # set +x 00:30:18.462 01:12:52 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:18.462 01:12:52 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:18.462 01:12:52 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:18.462 01:12:52 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:30:18.462 01:12:52 -- spdk/autotest.sh@383 -- # hostname 00:30:18.462 01:12:52 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:18.721 geninfo: WARNING: invalid characters removed from testname! 
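The coverage capture above (lcov -c over the repo, tagged with the node's hostname) is merged with the pre-test baseline and filtered by the commands that follow; stripped of the --rc branch/function options, the flow reduces to:

    out=/home/vagrant/spdk_repo/spdk/../output
    # 1. Capture counters accumulated during the tests (the step above).
    lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o $out/cov_test.info
    # 2. Merge with the baseline capture taken before the tests ran.
    lcov -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info
    # 3. Remove sources that should not count toward SPDK coverage (DPDK, /usr, example apps).
    lcov -q -r $out/cov_total.info '*/dpdk/*' -o $out/cov_total.info
    lcov -q -r $out/cov_total.info '/usr/*'   -o $out/cov_total.info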
00:30:57.439 01:13:30 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:00.727 01:13:34 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:03.261 01:13:37 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:06.549 01:13:40 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:09.084 01:13:43 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:11.618 01:13:45 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:14.228 01:13:48 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:14.228 01:13:48 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:31:14.228 01:13:48 -- common/autotest_common.sh@1690 -- $ lcov --version 00:31:14.228 01:13:48 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:31:14.228 01:13:48 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:31:14.228 01:13:48 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:31:14.228 01:13:48 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:31:14.228 01:13:48 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:31:14.228 01:13:48 -- scripts/common.sh@335 -- $ IFS=.-: 00:31:14.228 01:13:48 -- scripts/common.sh@335 -- $ read -ra ver1 00:31:14.228 01:13:48 -- scripts/common.sh@336 -- $ IFS=.-: 00:31:14.228 01:13:48 -- scripts/common.sh@336 -- $ read -ra ver2 00:31:14.228 01:13:48 -- scripts/common.sh@337 -- $ local 'op=<' 00:31:14.228 01:13:48 -- scripts/common.sh@339 -- $ ver1_l=2 00:31:14.228 01:13:48 -- scripts/common.sh@340 -- $ ver2_l=1 00:31:14.228 01:13:48 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:31:14.228 01:13:48 -- scripts/common.sh@343 -- $ case "$op" in 00:31:14.228 01:13:48 -- scripts/common.sh@344 -- $ : 1 00:31:14.228 01:13:48 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:31:14.228 01:13:48 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:14.228 01:13:48 -- scripts/common.sh@364 -- $ decimal 1 00:31:14.228 01:13:48 -- scripts/common.sh@352 -- $ local d=1 00:31:14.228 01:13:48 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:31:14.228 01:13:48 -- scripts/common.sh@354 -- $ echo 1 00:31:14.228 01:13:48 -- scripts/common.sh@364 -- $ ver1[v]=1 00:31:14.228 01:13:48 -- scripts/common.sh@365 -- $ decimal 2 00:31:14.228 01:13:48 -- scripts/common.sh@352 -- $ local d=2 00:31:14.228 01:13:48 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:31:14.228 01:13:48 -- scripts/common.sh@354 -- $ echo 2 00:31:14.228 01:13:48 -- scripts/common.sh@365 -- $ ver2[v]=2 00:31:14.228 01:13:48 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:31:14.228 01:13:48 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:31:14.228 01:13:48 -- scripts/common.sh@367 -- $ return 0 00:31:14.228 01:13:48 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:14.228 01:13:48 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:31:14.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.228 --rc genhtml_branch_coverage=1 00:31:14.228 --rc genhtml_function_coverage=1 00:31:14.228 --rc genhtml_legend=1 00:31:14.228 --rc geninfo_all_blocks=1 00:31:14.228 --rc geninfo_unexecuted_blocks=1 00:31:14.228 00:31:14.228 ' 00:31:14.228 01:13:48 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:31:14.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.228 --rc genhtml_branch_coverage=1 00:31:14.228 --rc genhtml_function_coverage=1 00:31:14.228 --rc genhtml_legend=1 00:31:14.228 --rc geninfo_all_blocks=1 00:31:14.228 --rc geninfo_unexecuted_blocks=1 00:31:14.228 00:31:14.228 ' 00:31:14.228 01:13:48 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:31:14.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.228 --rc genhtml_branch_coverage=1 00:31:14.228 --rc genhtml_function_coverage=1 00:31:14.228 --rc genhtml_legend=1 00:31:14.228 --rc geninfo_all_blocks=1 00:31:14.228 --rc geninfo_unexecuted_blocks=1 00:31:14.228 00:31:14.228 ' 00:31:14.228 01:13:48 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:31:14.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.228 --rc genhtml_branch_coverage=1 00:31:14.228 --rc genhtml_function_coverage=1 00:31:14.228 --rc genhtml_legend=1 00:31:14.228 --rc geninfo_all_blocks=1 00:31:14.228 --rc geninfo_unexecuted_blocks=1 00:31:14.228 00:31:14.228 ' 00:31:14.228 01:13:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:14.228 01:13:48 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:14.228 01:13:48 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.228 01:13:48 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.228 01:13:48 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:14.228 01:13:48 -- 
paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:14.228 01:13:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:14.228 01:13:48 -- paths/export.sh@5 -- $ export PATH 00:31:14.228 01:13:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:14.228 01:13:48 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:14.228 01:13:48 -- common/autobuild_common.sh@440 -- $ date +%s 00:31:14.228 01:13:48 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731892428.XXXXXX 00:31:14.228 01:13:48 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731892428.Nm0xHD 00:31:14.228 01:13:48 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:31:14.228 01:13:48 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:31:14.228 01:13:48 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:31:14.228 01:13:48 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:31:14.228 01:13:48 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:14.228 01:13:48 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:14.228 01:13:48 -- common/autobuild_common.sh@456 -- $ get_config_params 00:31:14.228 01:13:48 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:31:14.228 01:13:48 -- common/autotest_common.sh@10 -- $ set +x 00:31:14.229 01:13:48 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:31:14.229 01:13:48 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:31:14.229 01:13:48 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:31:14.229 01:13:48 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:14.229 01:13:48 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:14.229 01:13:48 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:14.229 01:13:48 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:31:14.229 01:13:48 -- common/autotest_common.sh@722 -- $ xtrace_disable 00:31:14.229 01:13:48 -- common/autotest_common.sh@10 -- $ set +x 00:31:14.229 01:13:48 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:31:14.229 01:13:48 -- spdk/autopackage.sh@36 -- $ [[ -n v22.11.4 ]] 00:31:14.229 01:13:48 -- 
spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]] 00:31:14.229 01:13:48 -- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path 00:31:14.229 01:13:48 -- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:31:14.229 01:13:48 -- tmp/spdk-ld-path@1 -- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:31:14.229 01:13:48 -- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH= 00:31:14.229 01:13:48 -- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH= 00:31:14.229 01:13:48 -- spdk/autopackage.sh@40 -- $ get_config_params 00:31:14.229 01:13:48 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:31:14.229 01:13:48 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:31:14.229 01:13:48 -- common/autotest_common.sh@10 -- $ set +x 00:31:14.229 01:13:48 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:31:14.229 01:13:48 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --enable-lto 00:31:14.488 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:31:14.488 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:31:14.488 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:31:14.488 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:31:14.748 Using 'verbs' RDMA provider 00:31:30.573 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:31:42.792 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:31:43.361 Creating mk/config.mk...done. 00:31:43.362 Creating mk/cc.flags.mk...done. 00:31:43.362 Type 'make' to build. 00:31:43.362 01:14:17 -- spdk/autopackage.sh@43 -- $ make -j10 00:31:43.362 make[1]: Nothing to be done for 'all'. 
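[Editor's note] The xtrace run above (scripts/common.sh@343-367) is the shell-level, component-wise version comparison the common scripts perform before exporting the lcov coverage options; here ver1[v]=1 is compared against ver2[v]=2, the less-than branch returns 0, and the pre-2.0 style `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options are kept. A minimal sketch of that comparison, assuming nothing beyond plain bash; the function and variable names below are illustrative and are not the actual SPDK helpers:

# Hedged sketch of the component-wise version compare traced above
# (scripts/common.sh@352-367); names are illustrative, not the SPDK originals.
ver_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1   # first argument is newer
        (( a < b )) && return 0   # first argument is older (matches the trace: 1 < 2 -> return 0)
    done
    return 1                      # equal -> not less-than
}

# Example mirroring the trace, where ver1[v]=1 and ver2[v]=2:
ver_lt 1.14 2.0 && echo "pre-2.0 lcov: keep the --rc lcov_branch_coverage=1 style options"

After that check, autopackage.sh@40-41 strips --enable-debug from the recorded config parameters and reconfigures with --enable-lto, which is why the release rebuild below starts from a fresh configure rather than reusing the debug build.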
00:31:43.931 CC lib/ut_mock/mock.o 00:31:43.931 CC lib/log/log.o 00:31:43.931 CC lib/log/log_deprecated.o 00:31:43.931 CC lib/log/log_flags.o 00:31:43.931 CC lib/ut/ut.o 00:31:43.931 LIB libspdk_ut_mock.a 00:31:43.931 LIB libspdk_ut.a 00:31:43.931 LIB libspdk_log.a 00:31:44.191 CC lib/util/base64.o 00:31:44.191 CC lib/util/bit_array.o 00:31:44.191 CC lib/ioat/ioat.o 00:31:44.191 CC lib/dma/dma.o 00:31:44.191 CC lib/util/crc32.o 00:31:44.191 CC lib/util/cpuset.o 00:31:44.191 CC lib/util/crc32c.o 00:31:44.191 CC lib/util/crc16.o 00:31:44.191 CXX lib/trace_parser/trace.o 00:31:44.191 CC lib/vfio_user/host/vfio_user_pci.o 00:31:44.450 CC lib/util/crc32_ieee.o 00:31:44.450 CC lib/util/crc64.o 00:31:44.450 CC lib/util/dif.o 00:31:44.450 CC lib/vfio_user/host/vfio_user.o 00:31:44.450 LIB libspdk_dma.a 00:31:44.450 CC lib/util/fd.o 00:31:44.450 LIB libspdk_ioat.a 00:31:44.450 CC lib/util/file.o 00:31:44.450 CC lib/util/hexlify.o 00:31:44.450 CC lib/util/iov.o 00:31:44.450 CC lib/util/math.o 00:31:44.450 CC lib/util/pipe.o 00:31:44.450 CC lib/util/strerror_tls.o 00:31:44.450 LIB libspdk_vfio_user.a 00:31:44.450 CC lib/util/string.o 00:31:44.450 CC lib/util/uuid.o 00:31:44.450 CC lib/util/fd_group.o 00:31:44.709 CC lib/util/xor.o 00:31:44.709 CC lib/util/zipf.o 00:31:44.709 LIB libspdk_util.a 00:31:44.966 LIB libspdk_trace_parser.a 00:31:44.966 CC lib/rdma/common.o 00:31:44.966 CC lib/rdma/rdma_verbs.o 00:31:44.966 CC lib/json/json_parse.o 00:31:44.966 CC lib/json/json_util.o 00:31:44.966 CC lib/vmd/vmd.o 00:31:44.966 CC lib/json/json_write.o 00:31:44.966 CC lib/vmd/led.o 00:31:44.966 CC lib/conf/conf.o 00:31:44.966 CC lib/idxd/idxd.o 00:31:44.966 CC lib/env_dpdk/env.o 00:31:44.966 CC lib/env_dpdk/memory.o 00:31:44.966 CC lib/env_dpdk/pci.o 00:31:44.966 CC lib/idxd/idxd_user.o 00:31:44.966 CC lib/env_dpdk/init.o 00:31:44.966 LIB libspdk_conf.a 00:31:44.966 LIB libspdk_json.a 00:31:44.966 LIB libspdk_rdma.a 00:31:44.966 CC lib/env_dpdk/threads.o 00:31:45.224 CC lib/env_dpdk/pci_ioat.o 00:31:45.224 LIB libspdk_vmd.a 00:31:45.224 CC lib/jsonrpc/jsonrpc_server.o 00:31:45.224 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:31:45.224 CC lib/jsonrpc/jsonrpc_client.o 00:31:45.224 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:31:45.224 CC lib/env_dpdk/pci_virtio.o 00:31:45.224 CC lib/env_dpdk/pci_vmd.o 00:31:45.224 CC lib/env_dpdk/pci_idxd.o 00:31:45.224 CC lib/env_dpdk/pci_event.o 00:31:45.224 LIB libspdk_idxd.a 00:31:45.224 CC lib/env_dpdk/sigbus_handler.o 00:31:45.224 CC lib/env_dpdk/pci_dpdk.o 00:31:45.224 CC lib/env_dpdk/pci_dpdk_2207.o 00:31:45.224 CC lib/env_dpdk/pci_dpdk_2211.o 00:31:45.224 LIB libspdk_jsonrpc.a 00:31:45.483 CC lib/rpc/rpc.o 00:31:45.741 LIB libspdk_rpc.a 00:31:45.741 LIB libspdk_env_dpdk.a 00:31:45.741 CC lib/notify/notify.o 00:31:45.741 CC lib/notify/notify_rpc.o 00:31:45.741 CC lib/sock/sock.o 00:31:45.741 CC lib/sock/sock_rpc.o 00:31:45.741 CC lib/trace/trace.o 00:31:45.741 CC lib/trace/trace_flags.o 00:31:45.741 CC lib/trace/trace_rpc.o 00:31:45.741 LIB libspdk_notify.a 00:31:45.999 LIB libspdk_trace.a 00:31:45.999 LIB libspdk_sock.a 00:31:45.999 CC lib/thread/thread.o 00:31:45.999 CC lib/thread/iobuf.o 00:31:45.999 CC lib/nvme/nvme_ctrlr.o 00:31:45.999 CC lib/nvme/nvme_ctrlr_cmd.o 00:31:45.999 CC lib/nvme/nvme_fabric.o 00:31:45.999 CC lib/nvme/nvme_ns.o 00:31:45.999 CC lib/nvme/nvme_ns_cmd.o 00:31:45.999 CC lib/nvme/nvme_qpair.o 00:31:45.999 CC lib/nvme/nvme_pcie.o 00:31:45.999 CC lib/nvme/nvme_pcie_common.o 00:31:46.257 CC lib/nvme/nvme.o 00:31:46.516 LIB libspdk_thread.a 00:31:46.516 CC 
lib/nvme/nvme_quirks.o 00:31:46.516 CC lib/nvme/nvme_transport.o 00:31:46.516 CC lib/nvme/nvme_discovery.o 00:31:46.516 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:31:46.516 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:31:46.775 CC lib/accel/accel.o 00:31:46.775 CC lib/nvme/nvme_tcp.o 00:31:46.775 CC lib/nvme/nvme_opal.o 00:31:46.775 CC lib/blob/blobstore.o 00:31:47.033 CC lib/blob/request.o 00:31:47.033 CC lib/blob/zeroes.o 00:31:47.033 CC lib/blob/blob_bs_dev.o 00:31:47.033 CC lib/nvme/nvme_io_msg.o 00:31:47.033 CC lib/accel/accel_rpc.o 00:31:47.033 CC lib/accel/accel_sw.o 00:31:47.033 CC lib/nvme/nvme_poll_group.o 00:31:47.033 CC lib/init/json_config.o 00:31:47.033 CC lib/virtio/virtio.o 00:31:47.033 CC lib/virtio/virtio_vhost_user.o 00:31:47.033 CC lib/nvme/nvme_zns.o 00:31:47.291 CC lib/nvme/nvme_cuse.o 00:31:47.291 LIB libspdk_accel.a 00:31:47.291 CC lib/init/subsystem.o 00:31:47.291 CC lib/init/subsystem_rpc.o 00:31:47.291 CC lib/nvme/nvme_vfio_user.o 00:31:47.291 CC lib/virtio/virtio_vfio_user.o 00:31:47.291 CC lib/init/rpc.o 00:31:47.291 CC lib/nvme/nvme_rdma.o 00:31:47.549 CC lib/bdev/bdev.o 00:31:47.549 CC lib/virtio/virtio_pci.o 00:31:47.549 CC lib/bdev/bdev_rpc.o 00:31:47.549 LIB libspdk_init.a 00:31:47.549 CC lib/bdev/bdev_zone.o 00:31:47.549 CC lib/bdev/part.o 00:31:47.549 LIB libspdk_virtio.a 00:31:47.549 CC lib/bdev/scsi_nvme.o 00:31:47.549 CC lib/event/app.o 00:31:47.549 CC lib/event/reactor.o 00:31:47.549 CC lib/event/log_rpc.o 00:31:47.549 CC lib/event/app_rpc.o 00:31:47.808 CC lib/event/scheduler_static.o 00:31:47.808 LIB libspdk_blob.a 00:31:47.808 LIB libspdk_event.a 00:31:48.066 CC lib/blobfs/blobfs.o 00:31:48.066 CC lib/blobfs/tree.o 00:31:48.066 CC lib/lvol/lvol.o 00:31:48.066 LIB libspdk_nvme.a 00:31:48.325 LIB libspdk_blobfs.a 00:31:48.325 LIB libspdk_lvol.a 00:31:48.325 LIB libspdk_bdev.a 00:31:48.585 CC lib/scsi/dev.o 00:31:48.585 CC lib/scsi/lun.o 00:31:48.585 CC lib/scsi/port.o 00:31:48.585 CC lib/scsi/scsi.o 00:31:48.585 CC lib/ftl/ftl_core.o 00:31:48.585 CC lib/nbd/nbd.o 00:31:48.585 CC lib/nbd/nbd_rpc.o 00:31:48.585 CC lib/ftl/ftl_init.o 00:31:48.585 CC lib/ftl/ftl_layout.o 00:31:48.585 CC lib/nvmf/ctrlr.o 00:31:48.585 CC lib/nvmf/ctrlr_discovery.o 00:31:48.585 CC lib/scsi/scsi_bdev.o 00:31:48.585 CC lib/scsi/scsi_pr.o 00:31:48.585 CC lib/scsi/scsi_rpc.o 00:31:48.585 CC lib/nvmf/ctrlr_bdev.o 00:31:48.585 CC lib/ftl/ftl_debug.o 00:31:48.845 CC lib/ftl/ftl_io.o 00:31:48.845 LIB libspdk_nbd.a 00:31:48.845 CC lib/ftl/ftl_sb.o 00:31:48.845 CC lib/ftl/ftl_l2p.o 00:31:48.845 CC lib/ftl/ftl_l2p_flat.o 00:31:48.845 CC lib/ftl/ftl_nv_cache.o 00:31:48.845 CC lib/ftl/ftl_band.o 00:31:48.845 CC lib/scsi/task.o 00:31:48.845 CC lib/ftl/ftl_band_ops.o 00:31:48.845 CC lib/nvmf/subsystem.o 00:31:48.845 CC lib/nvmf/nvmf.o 00:31:48.845 CC lib/ftl/ftl_writer.o 00:31:48.845 CC lib/ftl/ftl_rq.o 00:31:48.845 CC lib/ftl/ftl_reloc.o 00:31:49.104 LIB libspdk_scsi.a 00:31:49.104 CC lib/ftl/ftl_l2p_cache.o 00:31:49.104 CC lib/ftl/ftl_p2l.o 00:31:49.104 CC lib/ftl/mngt/ftl_mngt.o 00:31:49.104 CC lib/nvmf/nvmf_rpc.o 00:31:49.104 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:31:49.104 CC lib/iscsi/conn.o 00:31:49.104 CC lib/vhost/vhost.o 00:31:49.104 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:31:49.104 CC lib/ftl/mngt/ftl_mngt_startup.o 00:31:49.104 CC lib/ftl/mngt/ftl_mngt_md.o 00:31:49.104 CC lib/nvmf/transport.o 00:31:49.364 CC lib/nvmf/tcp.o 00:31:49.364 CC lib/nvmf/rdma.o 00:31:49.364 CC lib/ftl/mngt/ftl_mngt_misc.o 00:31:49.364 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:31:49.364 CC lib/vhost/vhost_rpc.o 00:31:49.364 
CC lib/vhost/vhost_scsi.o 00:31:49.364 CC lib/vhost/vhost_blk.o 00:31:49.364 CC lib/iscsi/init_grp.o 00:31:49.364 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:31:49.364 CC lib/ftl/mngt/ftl_mngt_band.o 00:31:49.364 CC lib/iscsi/iscsi.o 00:31:49.623 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:31:49.623 CC lib/vhost/rte_vhost_user.o 00:31:49.623 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:31:49.623 CC lib/iscsi/md5.o 00:31:49.623 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:31:49.882 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:31:49.882 CC lib/ftl/utils/ftl_conf.o 00:31:49.882 CC lib/iscsi/param.o 00:31:49.882 CC lib/ftl/utils/ftl_md.o 00:31:49.882 CC lib/ftl/utils/ftl_mempool.o 00:31:49.882 CC lib/ftl/utils/ftl_bitmap.o 00:31:49.882 CC lib/ftl/utils/ftl_property.o 00:31:49.882 CC lib/iscsi/portal_grp.o 00:31:49.882 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:31:49.882 LIB libspdk_nvmf.a 00:31:49.882 CC lib/iscsi/tgt_node.o 00:31:49.882 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:31:49.882 CC lib/iscsi/iscsi_subsystem.o 00:31:49.882 CC lib/iscsi/iscsi_rpc.o 00:31:50.141 CC lib/iscsi/task.o 00:31:50.141 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:31:50.141 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:31:50.141 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:31:50.141 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:31:50.141 LIB libspdk_vhost.a 00:31:50.141 CC lib/ftl/upgrade/ftl_sb_v3.o 00:31:50.141 CC lib/ftl/upgrade/ftl_sb_v5.o 00:31:50.141 CC lib/ftl/nvc/ftl_nvc_dev.o 00:31:50.141 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:31:50.141 CC lib/ftl/base/ftl_base_dev.o 00:31:50.141 CC lib/ftl/base/ftl_base_bdev.o 00:31:50.141 LIB libspdk_iscsi.a 00:31:50.400 LIB libspdk_ftl.a 00:31:50.660 CC module/env_dpdk/env_dpdk_rpc.o 00:31:50.660 CC module/sock/posix/posix.o 00:31:50.660 CC module/blob/bdev/blob_bdev.o 00:31:50.660 CC module/accel/ioat/accel_ioat.o 00:31:50.660 CC module/accel/dsa/accel_dsa.o 00:31:50.660 CC module/accel/error/accel_error.o 00:31:50.660 CC module/scheduler/gscheduler/gscheduler.o 00:31:50.660 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:31:50.660 CC module/scheduler/dynamic/scheduler_dynamic.o 00:31:50.660 CC module/accel/iaa/accel_iaa.o 00:31:50.660 LIB libspdk_env_dpdk_rpc.a 00:31:50.660 CC module/accel/iaa/accel_iaa_rpc.o 00:31:50.660 LIB libspdk_scheduler_dpdk_governor.a 00:31:50.660 LIB libspdk_scheduler_gscheduler.a 00:31:50.660 CC module/accel/ioat/accel_ioat_rpc.o 00:31:50.660 CC module/accel/dsa/accel_dsa_rpc.o 00:31:50.660 LIB libspdk_scheduler_dynamic.a 00:31:50.660 CC module/accel/error/accel_error_rpc.o 00:31:50.919 LIB libspdk_blob_bdev.a 00:31:50.919 LIB libspdk_accel_iaa.a 00:31:50.919 LIB libspdk_accel_ioat.a 00:31:50.919 LIB libspdk_accel_error.a 00:31:50.919 LIB libspdk_accel_dsa.a 00:31:50.919 CC module/bdev/malloc/bdev_malloc.o 00:31:50.919 CC module/blobfs/bdev/blobfs_bdev.o 00:31:50.919 CC module/bdev/error/vbdev_error.o 00:31:50.919 CC module/bdev/lvol/vbdev_lvol.o 00:31:50.919 CC module/bdev/delay/vbdev_delay.o 00:31:50.919 CC module/bdev/gpt/gpt.o 00:31:50.919 CC module/bdev/nvme/bdev_nvme.o 00:31:50.919 CC module/bdev/passthru/vbdev_passthru.o 00:31:50.919 CC module/bdev/null/bdev_null.o 00:31:50.919 LIB libspdk_sock_posix.a 00:31:51.178 CC module/bdev/gpt/vbdev_gpt.o 00:31:51.178 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:31:51.178 CC module/bdev/nvme/bdev_nvme_rpc.o 00:31:51.178 CC module/bdev/error/vbdev_error_rpc.o 00:31:51.178 CC module/bdev/malloc/bdev_malloc_rpc.o 00:31:51.178 CC module/bdev/delay/vbdev_delay_rpc.o 00:31:51.178 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:31:51.178 CC 
module/bdev/null/bdev_null_rpc.o 00:31:51.178 LIB libspdk_blobfs_bdev.a 00:31:51.178 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:31:51.178 CC module/bdev/nvme/nvme_rpc.o 00:31:51.178 LIB libspdk_bdev_gpt.a 00:31:51.178 LIB libspdk_bdev_error.a 00:31:51.178 LIB libspdk_bdev_malloc.a 00:31:51.178 LIB libspdk_bdev_delay.a 00:31:51.178 LIB libspdk_bdev_passthru.a 00:31:51.178 CC module/bdev/nvme/bdev_mdns_client.o 00:31:51.178 LIB libspdk_bdev_null.a 00:31:51.178 CC module/bdev/raid/bdev_raid.o 00:31:51.437 CC module/bdev/split/vbdev_split.o 00:31:51.437 CC module/bdev/split/vbdev_split_rpc.o 00:31:51.437 CC module/bdev/nvme/vbdev_opal.o 00:31:51.437 CC module/bdev/zone_block/vbdev_zone_block.o 00:31:51.437 LIB libspdk_bdev_lvol.a 00:31:51.437 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:31:51.437 CC module/bdev/aio/bdev_aio.o 00:31:51.437 CC module/bdev/aio/bdev_aio_rpc.o 00:31:51.437 CC module/bdev/nvme/vbdev_opal_rpc.o 00:31:51.437 LIB libspdk_bdev_split.a 00:31:51.437 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:31:51.437 CC module/bdev/raid/bdev_raid_rpc.o 00:31:51.437 CC module/bdev/ftl/bdev_ftl.o 00:31:51.437 CC module/bdev/ftl/bdev_ftl_rpc.o 00:31:51.437 CC module/bdev/raid/bdev_raid_sb.o 00:31:51.697 LIB libspdk_bdev_zone_block.a 00:31:51.697 LIB libspdk_bdev_aio.a 00:31:51.697 CC module/bdev/raid/raid0.o 00:31:51.697 CC module/bdev/raid/raid1.o 00:31:51.697 CC module/bdev/raid/concat.o 00:31:51.697 CC module/bdev/raid/raid5f.o 00:31:51.697 CC module/bdev/iscsi/bdev_iscsi.o 00:31:51.697 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:31:51.697 LIB libspdk_bdev_ftl.a 00:31:51.697 CC module/bdev/virtio/bdev_virtio_blk.o 00:31:51.697 CC module/bdev/virtio/bdev_virtio_scsi.o 00:31:51.697 CC module/bdev/virtio/bdev_virtio_rpc.o 00:31:51.697 LIB libspdk_bdev_nvme.a 00:31:51.955 LIB libspdk_bdev_raid.a 00:31:51.955 LIB libspdk_bdev_iscsi.a 00:31:51.955 LIB libspdk_bdev_virtio.a 00:31:52.214 CC module/event/subsystems/vmd/vmd.o 00:31:52.214 CC module/event/subsystems/vmd/vmd_rpc.o 00:31:52.214 CC module/event/subsystems/sock/sock.o 00:31:52.215 CC module/event/subsystems/iobuf/iobuf.o 00:31:52.215 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:31:52.215 CC module/event/subsystems/scheduler/scheduler.o 00:31:52.215 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:31:52.473 LIB libspdk_event_vmd.a 00:31:52.473 LIB libspdk_event_vhost_blk.a 00:31:52.473 LIB libspdk_event_sock.a 00:31:52.473 LIB libspdk_event_iobuf.a 00:31:52.473 LIB libspdk_event_scheduler.a 00:31:52.473 CC module/event/subsystems/accel/accel.o 00:31:52.733 LIB libspdk_event_accel.a 00:31:52.733 CC module/event/subsystems/bdev/bdev.o 00:31:52.991 LIB libspdk_event_bdev.a 00:31:53.250 CC module/event/subsystems/nbd/nbd.o 00:31:53.250 CC module/event/subsystems/scsi/scsi.o 00:31:53.250 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:31:53.250 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:31:53.250 LIB libspdk_event_nbd.a 00:31:53.250 LIB libspdk_event_scsi.a 00:31:53.509 LIB libspdk_event_nvmf.a 00:31:53.509 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:31:53.509 CC module/event/subsystems/iscsi/iscsi.o 00:31:53.769 LIB libspdk_event_vhost_scsi.a 00:31:53.769 LIB libspdk_event_iscsi.a 00:31:53.769 CXX app/trace/trace.o 00:31:54.029 CC examples/sock/hello_world/hello_sock.o 00:31:54.029 CC examples/ioat/perf/perf.o 00:31:54.029 CC examples/accel/perf/accel_perf.o 00:31:54.029 CC examples/nvme/hello_world/hello_world.o 00:31:54.029 CC examples/vmd/lsvmd/lsvmd.o 00:31:54.029 CC examples/bdev/hello_world/hello_bdev.o 00:31:54.029 
CC examples/blob/hello_world/hello_blob.o 00:31:54.029 CC examples/nvmf/nvmf/nvmf.o 00:31:54.029 CC test/accel/dif/dif.o 00:31:54.029 LINK lsvmd 00:31:54.288 LINK ioat_perf 00:31:54.288 LINK hello_world 00:31:54.288 LINK hello_sock 00:31:54.288 LINK hello_blob 00:31:54.288 LINK hello_bdev 00:31:54.288 LINK nvmf 00:31:54.288 LINK spdk_trace 00:31:54.288 LINK accel_perf 00:31:54.288 LINK dif 00:32:02.529 CC examples/ioat/verify/verify.o 00:32:02.529 LINK verify 00:32:03.936 CC app/trace_record/trace_record.o 00:32:04.876 LINK spdk_trace_record 00:32:05.449 CC examples/nvme/reconnect/reconnect.o 00:32:06.826 LINK reconnect 00:32:09.360 CC examples/nvme/nvme_manage/nvme_manage.o 00:32:09.618 CC examples/vmd/led/led.o 00:32:10.186 LINK led 00:32:10.445 LINK nvme_manage 00:32:11.832 CC app/nvmf_tgt/nvmf_main.o 00:32:12.400 LINK nvmf_tgt 00:32:12.659 CC app/iscsi_tgt/iscsi_tgt.o 00:32:12.918 CC app/spdk_tgt/spdk_tgt.o 00:32:13.485 LINK iscsi_tgt 00:32:14.053 LINK spdk_tgt 00:32:40.609 CC app/spdk_lspci/spdk_lspci.o 00:32:40.609 CC app/spdk_nvme_perf/perf.o 00:32:40.609 LINK spdk_lspci 00:32:40.609 LINK spdk_nvme_perf 00:32:43.894 CC examples/nvme/arbitration/arbitration.o 00:32:46.425 LINK arbitration 00:33:18.503 CC examples/nvme/hotplug/hotplug.o 00:33:18.503 LINK hotplug 00:33:28.494 CC examples/util/zipf/zipf.o 00:33:29.062 CC examples/bdev/bdevperf/bdevperf.o 00:33:29.062 LINK zipf 00:33:32.352 LINK bdevperf 00:33:36.542 CC app/spdk_nvme_identify/identify.o 00:33:38.492 LINK spdk_nvme_identify 00:33:41.796 CC examples/blob/cli/blobcli.o 00:33:41.796 CC examples/thread/thread/thread_ex.o 00:33:42.734 LINK thread 00:33:42.734 LINK blobcli 00:33:46.024 CC test/app/bdev_svc/bdev_svc.o 00:33:46.283 LINK bdev_svc 00:33:46.283 CC examples/nvme/cmb_copy/cmb_copy.o 00:33:47.220 LINK cmb_copy 00:33:57.215 CC examples/nvme/abort/abort.o 00:33:57.783 LINK abort 00:34:19.757 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:34:19.757 LINK pmr_persistence 00:34:19.757 CC test/bdev/bdevio/bdevio.o 00:34:20.326 LINK bdevio 00:34:30.306 CC test/blobfs/mkfs/mkfs.o 00:34:30.565 LINK mkfs 00:34:35.836 CC examples/interrupt_tgt/interrupt_tgt.o 00:34:36.094 CC examples/idxd/perf/perf.o 00:34:36.661 LINK interrupt_tgt 00:34:37.229 LINK idxd_perf 00:34:49.439 TEST_HEADER include/spdk/config.h 00:34:49.439 CXX test/cpp_headers/accel.o 00:34:49.698 CXX test/cpp_headers/accel_module.o 00:34:51.075 CXX test/cpp_headers/assert.o 00:34:51.334 CC app/spdk_nvme_discover/discovery_aer.o 00:34:51.900 CXX test/cpp_headers/barrier.o 00:34:52.837 LINK spdk_nvme_discover 00:34:53.097 CXX test/cpp_headers/base64.o 00:34:54.475 CXX test/cpp_headers/bdev.o 00:34:55.858 CXX test/cpp_headers/bdev_module.o 00:34:57.235 CXX test/cpp_headers/bdev_zone.o 00:34:58.613 CXX test/cpp_headers/bit_array.o 00:34:59.550 CXX test/cpp_headers/bit_pool.o 00:35:00.925 CXX test/cpp_headers/blob.o 00:35:01.862 CXX test/cpp_headers/blob_bdev.o 00:35:03.240 CXX test/cpp_headers/blobfs.o 00:35:03.240 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:35:04.177 CXX test/cpp_headers/blobfs_bdev.o 00:35:05.554 LINK nvme_fuzz 00:35:05.554 CXX test/cpp_headers/conf.o 00:35:06.933 CXX test/cpp_headers/config.o 00:35:07.192 CXX test/cpp_headers/cpuset.o 00:35:08.572 CXX test/cpp_headers/crc16.o 00:35:09.512 CXX test/cpp_headers/crc32.o 00:35:10.888 CXX test/cpp_headers/crc64.o 00:35:12.267 CXX test/cpp_headers/dif.o 00:35:13.205 CXX test/cpp_headers/dma.o 00:35:14.583 CXX test/cpp_headers/endian.o 00:35:15.518 CXX test/cpp_headers/env.o 00:35:16.897 CXX 
test/cpp_headers/env_dpdk.o 00:35:18.277 CXX test/cpp_headers/event.o 00:35:19.657 CXX test/cpp_headers/fd.o 00:35:20.593 CXX test/cpp_headers/fd_group.o 00:35:21.971 CXX test/cpp_headers/file.o 00:35:22.908 CXX test/cpp_headers/ftl.o 00:35:22.908 CXX test/cpp_headers/gpt_spec.o 00:35:23.846 CXX test/cpp_headers/hexlify.o 00:35:24.105 CC test/app/histogram_perf/histogram_perf.o 00:35:24.673 LINK histogram_perf 00:35:24.673 CXX test/cpp_headers/histogram_data.o 00:35:25.608 CXX test/cpp_headers/idxd.o 00:35:26.546 CXX test/cpp_headers/idxd_spec.o 00:35:27.114 CXX test/cpp_headers/init.o 00:35:28.052 CXX test/cpp_headers/ioat.o 00:35:28.620 CC app/spdk_top/spdk_top.o 00:35:28.620 CXX test/cpp_headers/ioat_spec.o 00:35:29.574 CXX test/cpp_headers/iscsi_spec.o 00:35:30.174 CXX test/cpp_headers/json.o 00:35:30.445 LINK spdk_top 00:35:31.013 CXX test/cpp_headers/jsonrpc.o 00:35:31.272 CXX test/cpp_headers/likely.o 00:35:31.839 CXX test/cpp_headers/log.o 00:35:32.098 CC app/vhost/vhost.o 00:35:32.666 CXX test/cpp_headers/lvol.o 00:35:32.666 LINK vhost 00:35:33.234 CXX test/cpp_headers/memory.o 00:35:33.234 CXX test/cpp_headers/mmio.o 00:35:33.802 CXX test/cpp_headers/nbd.o 00:35:33.802 CC test/app/jsoncat/jsoncat.o 00:35:33.802 CXX test/cpp_headers/notify.o 00:35:34.061 LINK jsoncat 00:35:34.320 CXX test/cpp_headers/nvme.o 00:35:34.320 CC test/app/stub/stub.o 00:35:34.580 CC app/spdk_dd/spdk_dd.o 00:35:34.580 CXX test/cpp_headers/nvme_intel.o 00:35:34.580 LINK stub 00:35:35.147 CXX test/cpp_headers/nvme_ocssd.o 00:35:35.147 LINK spdk_dd 00:35:35.715 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:35:35.715 CXX test/cpp_headers/nvme_ocssd_spec.o 00:35:35.974 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:35:36.233 CXX test/cpp_headers/nvme_spec.o 00:35:36.492 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:35:36.492 CXX test/cpp_headers/nvme_zns.o 00:35:37.060 CXX test/cpp_headers/nvmf.o 00:35:37.319 LINK vhost_fuzz 00:35:37.578 LINK iscsi_fuzz 00:35:37.578 CXX test/cpp_headers/nvmf_cmd.o 00:35:38.516 CXX test/cpp_headers/nvmf_fc_spec.o 00:35:38.516 CXX test/cpp_headers/nvmf_spec.o 00:35:39.084 CXX test/cpp_headers/nvmf_transport.o 00:35:39.652 CC app/fio/nvme/fio_plugin.o 00:35:39.652 CXX test/cpp_headers/opal.o 00:35:40.589 CXX test/cpp_headers/opal_spec.o 00:35:41.157 CXX test/cpp_headers/pci_ids.o 00:35:41.157 LINK spdk_nvme 00:35:42.094 CXX test/cpp_headers/pipe.o 00:35:42.094 CC app/fio/bdev/fio_plugin.o 00:35:42.663 CXX test/cpp_headers/queue.o 00:35:42.929 CXX test/cpp_headers/reduce.o 00:35:43.863 CXX test/cpp_headers/rpc.o 00:35:43.863 LINK spdk_bdev 00:35:44.795 CXX test/cpp_headers/scheduler.o 00:35:45.729 CXX test/cpp_headers/scsi.o 00:35:46.665 CXX test/cpp_headers/scsi_spec.o 00:35:48.043 CXX test/cpp_headers/sock.o 00:35:48.980 CXX test/cpp_headers/stdinc.o 00:35:49.916 CXX test/cpp_headers/string.o 00:35:50.853 CXX test/cpp_headers/thread.o 00:35:52.230 CXX test/cpp_headers/trace.o 00:35:52.798 CXX test/cpp_headers/trace_parser.o 00:35:53.737 CXX test/cpp_headers/tree.o 00:35:53.996 CXX test/cpp_headers/ublk.o 00:35:54.933 CXX test/cpp_headers/util.o 00:35:54.933 CXX test/cpp_headers/uuid.o 00:35:55.503 CXX test/cpp_headers/version.o 00:35:55.762 CXX test/cpp_headers/vfio_user_pci.o 00:35:56.022 CXX test/cpp_headers/vfio_user_spec.o 00:35:56.590 CXX test/cpp_headers/vhost.o 00:35:56.590 CXX test/cpp_headers/vmd.o 00:35:57.527 CXX test/cpp_headers/xor.o 00:35:57.527 CXX test/cpp_headers/zipf.o 00:35:57.786 CC test/dma/test_dma/test_dma.o 00:35:59.163 CC test/event/event_perf/event_perf.o 
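[Editor's note] The long run of CXX test/cpp_headers/*.o lines above is the public-header check: each header under include/spdk appears to be compiled in its own C++ translation unit, so a header that is not self-contained (or not C++-clean) fails the build here rather than in a consumer. A stand-alone, purely illustrative equivalent, assuming g++ and an SPDK checkout with headers under include/spdk; this is not the project's actual test driver:

# Illustrative only: roughly what those CXX lines represent.
for hdr in include/spdk/*.h; do
    tu="/tmp/$(basename "${hdr%.h}").cpp"
    printf '#include "spdk/%s"\n' "$(basename "$hdr")" > "$tu"
    g++ -std=c++11 -I include -c "$tu" -o "${tu%.cpp}.o" \
        || echo "not self-contained under C++: $hdr"
done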
00:35:59.163 LINK test_dma 00:35:59.163 CC test/env/mem_callbacks/mem_callbacks.o 00:35:59.769 LINK event_perf 00:36:00.043 LINK mem_callbacks 00:36:01.946 CC test/env/vtophys/vtophys.o 00:36:02.205 LINK vtophys 00:36:02.205 CC test/event/reactor/reactor.o 00:36:03.142 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:36:03.142 LINK reactor 00:36:03.400 CC test/env/memory/memory_ut.o 00:36:03.967 LINK env_dpdk_post_init 00:36:04.903 LINK memory_ut 00:36:13.018 CC test/env/pci/pci_ut.o 00:36:13.956 CC test/event/reactor_perf/reactor_perf.o 00:36:13.956 LINK pci_ut 00:36:14.215 CC test/lvol/esnap/esnap.o 00:36:14.473 LINK reactor_perf 00:36:15.848 CC test/event/app_repeat/app_repeat.o 00:36:16.418 LINK app_repeat 00:36:18.325 CC test/nvme/aer/aer.o 00:36:19.704 LINK aer 00:36:21.083 CC test/nvme/reset/reset.o 00:36:22.021 LINK reset 00:36:25.311 LINK esnap 00:36:28.600 CC test/nvme/sgl/sgl.o 00:36:29.539 LINK sgl 00:36:32.076 CC test/nvme/e2edp/nvme_dp.o 00:36:33.455 LINK nvme_dp 00:37:00.010 CC test/event/scheduler/scheduler.o 00:37:00.010 CC test/nvme/overhead/overhead.o 00:37:00.010 LINK scheduler 00:37:00.010 LINK overhead 00:37:00.010 CC test/nvme/err_injection/err_injection.o 00:37:00.010 LINK err_injection 00:37:01.912 CC test/rpc_client/rpc_client_test.o 00:37:01.912 CC test/nvme/startup/startup.o 00:37:02.480 LINK rpc_client_test 00:37:02.480 LINK startup 00:37:05.018 CC test/nvme/reserve/reserve.o 00:37:05.641 LINK reserve 00:37:09.851 CC test/nvme/simple_copy/simple_copy.o 00:37:11.229 LINK simple_copy 00:37:13.134 CC test/nvme/connect_stress/connect_stress.o 00:37:14.072 LINK connect_stress 00:37:18.262 CC test/thread/poller_perf/poller_perf.o 00:37:18.262 LINK poller_perf 00:37:22.455 CC test/thread/lock/spdk_lock.o 00:37:22.455 CC test/nvme/boot_partition/boot_partition.o 00:37:23.393 LINK boot_partition 00:37:25.929 LINK spdk_lock 00:37:27.841 CC test/nvme/compliance/nvme_compliance.o 00:37:28.407 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:37:28.974 LINK nvme_compliance 00:37:29.233 LINK histogram_ut 00:37:33.432 CC test/nvme/fused_ordering/fused_ordering.o 00:37:33.690 LINK fused_ordering 00:37:34.629 CC test/unit/lib/accel/accel.c/accel_ut.o 00:37:35.567 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:37:36.504 CC test/nvme/doorbell_aers/doorbell_aers.o 00:37:37.071 LINK doorbell_aers 00:37:37.639 LINK accel_ut 00:37:37.639 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:37:39.017 CC test/unit/lib/blob/blob.c/blob_ut.o 00:37:39.276 LINK blob_bdev_ut 00:37:41.812 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:37:42.751 LINK tree_ut 00:37:46.042 LINK bdev_ut 00:37:46.042 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:37:47.423 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:37:47.423 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:37:47.423 CC test/nvme/fdp/fdp.o 00:37:47.991 LINK blobfs_async_ut 00:37:47.991 LINK blobfs_bdev_ut 00:37:48.640 LINK fdp 00:37:49.232 LINK blobfs_sync_ut 00:37:49.492 LINK blob_ut 00:37:50.430 CC test/unit/lib/dma/dma.c/dma_ut.o 00:37:50.999 CC test/unit/lib/event/app.c/app_ut.o 00:37:51.258 LINK dma_ut 00:37:51.517 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:37:53.425 LINK app_ut 00:37:53.993 LINK reactor_ut 00:37:58.185 CC test/nvme/cuse/cuse.o 00:38:00.721 LINK cuse 00:38:01.290 CC test/unit/lib/bdev/part.c/part_ut.o 00:38:01.550 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:38:01.810 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:38:02.070 LINK scsi_nvme_ut 00:38:02.639 LINK ioat_ut 
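[Editor's note] The CC test/unit/... lines that begin above follow the unit-test layout in which each library source file is mirrored by a directory named after it and a <file>_ut.o object, which is why paths like test/unit/lib/bdev/part.c/part_ut.o contain a ".c" directory component. A small, purely illustrative mapping in shell (no SPDK tooling is invoked, and the path scheme is inferred from the log itself):

# Derive the unit-test object path the build log prints for a library source.
src="lib/bdev/part.c"
ut_path="test/unit/${src}/$(basename "${src%.c}")_ut.o"
echo "$ut_path"   # -> test/unit/lib/bdev/part.c/part_ut.o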
00:38:03.209 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:38:03.778 LINK gpt_ut 00:38:04.037 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:38:04.296 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:38:04.296 LINK part_ut 00:38:04.864 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:38:05.122 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:38:05.122 LINK conn_ut 00:38:05.122 LINK vbdev_lvol_ut 00:38:05.122 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:38:05.381 LINK bdev_zone_ut 00:38:05.950 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:38:06.520 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:38:06.520 LINK bdev_raid_sb_ut 00:38:06.520 LINK bdev_raid_ut 00:38:06.779 LINK init_grp_ut 00:38:07.039 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:38:07.039 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:38:07.039 LINK bdev_ut 00:38:07.608 LINK concat_ut 00:38:07.867 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:38:08.436 LINK raid1_ut 00:38:08.436 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:38:08.436 CC test/unit/lib/log/log.c/log_ut.o 00:38:08.695 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:38:08.695 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:38:08.954 LINK iscsi_ut 00:38:08.954 LINK log_ut 00:38:08.954 LINK jsonrpc_server_ut 00:38:08.954 CC test/unit/lib/iscsi/param.c/param_ut.o 00:38:09.214 LINK json_util_ut 00:38:09.781 LINK param_ut 00:38:10.040 LINK json_parse_ut 00:38:10.978 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:38:10.978 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:38:10.978 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:38:11.548 LINK tgt_node_ut 00:38:11.807 LINK portal_grp_ut 00:38:12.376 LINK json_write_ut 00:38:12.376 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:38:12.376 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:38:13.315 LINK vbdev_zone_block_ut 00:38:13.315 LINK raid5f_ut 00:38:13.574 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:38:13.833 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:38:14.092 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:38:14.092 CC test/unit/lib/notify/notify.c/notify_ut.o 00:38:14.352 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:38:14.611 LINK notify_ut 00:38:15.548 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:38:15.548 LINK nvme_ut 00:38:15.548 LINK lvol_ut 00:38:15.548 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:38:16.117 LINK bdev_nvme_ut 00:38:16.117 LINK nvme_ctrlr_ut 00:38:16.117 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:38:16.117 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:38:16.117 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:38:16.380 LINK nvme_ctrlr_cmd_ut 00:38:17.316 LINK nvme_ctrlr_ocssd_cmd_ut 00:38:17.574 LINK nvme_ns_ut 00:38:17.833 LINK tcp_ut 00:38:18.772 LINK nvme_ns_cmd_ut 00:38:19.032 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:38:19.600 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:38:20.168 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:38:20.427 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:38:20.996 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:38:21.255 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:38:21.255 LINK nvme_ns_ocssd_cmd_ut 00:38:21.514 LINK ctrlr_ut 00:38:21.514 LINK dev_ut 00:38:21.774 LINK nvme_poll_group_ut 00:38:22.048 LINK nvme_pcie_ut 00:38:22.682 LINK nvme_qpair_ut 00:38:22.682 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:38:22.941 LINK lun_ut 00:38:23.200 CC 
test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:38:23.458 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:38:23.458 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:38:23.717 LINK nvme_quirks_ut 00:38:23.717 LINK scsi_ut 00:38:23.976 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:38:24.544 LINK scsi_bdev_ut 00:38:24.544 LINK nvme_tcp_ut 00:38:24.544 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:38:24.544 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:38:24.803 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:38:24.803 CC test/unit/lib/sock/sock.c/sock_ut.o 00:38:25.061 LINK scsi_pr_ut 00:38:25.319 LINK nvme_transport_ut 00:38:25.319 CC test/unit/lib/sock/posix.c/posix_ut.o 00:38:25.578 LINK sock_ut 00:38:25.578 LINK subsystem_ut 00:38:25.578 CC test/unit/lib/thread/thread.c/thread_ut.o 00:38:26.146 LINK posix_ut 00:38:26.146 CC test/unit/lib/util/base64.c/base64_ut.o 00:38:26.404 LINK base64_ut 00:38:26.972 LINK thread_ut 00:38:27.231 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:38:27.231 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:38:27.231 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:38:27.490 LINK cpuset_ut 00:38:27.490 LINK bit_array_ut 00:38:28.059 LINK nvme_io_msg_ut 00:38:28.059 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:38:28.318 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:38:28.318 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:38:28.318 LINK crc16_ut 00:38:28.577 LINK crc32_ieee_ut 00:38:28.577 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:38:28.577 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:38:28.836 LINK nvme_pcie_common_ut 00:38:28.836 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:38:28.836 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:38:28.836 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:38:29.094 LINK nvme_opal_ut 00:38:29.094 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:38:29.094 LINK crc32c_ut 00:38:29.094 LINK nvme_fabric_ut 00:38:29.094 CC test/unit/lib/util/dif.c/dif_ut.o 00:38:29.094 LINK crc64_ut 00:38:29.354 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:38:29.354 LINK ctrlr_discovery_ut 00:38:29.354 CC test/unit/lib/util/iov.c/iov_ut.o 00:38:29.613 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:38:29.613 LINK iobuf_ut 00:38:29.613 LINK dif_ut 00:38:29.613 LINK iov_ut 00:38:29.871 LINK nvme_rdma_ut 00:38:29.871 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:38:29.871 LINK ctrlr_bdev_ut 00:38:30.130 LINK pci_event_ut 00:38:30.699 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:38:31.267 LINK subsystem_ut 00:38:31.267 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:38:31.526 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:38:31.526 LINK rpc_ut 00:38:31.526 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:38:32.094 LINK idxd_user_ut 00:38:32.094 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:38:32.094 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:38:32.094 CC test/unit/lib/util/math.c/math_ut.o 00:38:32.094 CC test/unit/lib/util/string.c/string_ut.o 00:38:32.094 LINK nvme_cuse_ut 00:38:32.354 LINK math_ut 00:38:32.354 LINK pipe_ut 00:38:32.354 LINK string_ut 00:38:32.354 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:38:32.354 LINK idxd_ut 00:38:32.613 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:38:32.613 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:38:32.873 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:38:32.873 LINK nvmf_ut 00:38:33.133 CC test/unit/lib/rdma/common.c/common_ut.o 00:38:33.392 CC test/unit/lib/util/xor.c/xor_ut.o 00:38:33.392 CC 
test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:38:33.392 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:38:33.392 LINK common_ut 00:38:33.392 LINK transport_ut 00:38:33.652 LINK xor_ut 00:38:33.652 LINK rdma_ut 00:38:33.652 LINK ftl_l2p_ut 00:38:33.652 LINK vhost_ut 00:38:33.912 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:38:34.172 LINK ftl_band_ut 00:38:34.431 LINK ftl_io_ut 00:38:34.997 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:38:34.997 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:38:35.255 LINK ftl_bitmap_ut 00:38:35.515 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:38:35.515 LINK ftl_mempool_ut 00:38:35.774 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:38:36.710 LINK ftl_mngt_ut 00:38:37.275 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:38:37.275 LINK ftl_sb_ut 00:38:37.843 LINK ftl_layout_upgrade_ut 00:39:09.935 json_parse_ut.c: In function ‘test_parse_nesting’: 00:39:09.935 json_parse_ut.c:616:1: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without 00:39:09.935 616 | test_parse_nesting(void) 00:39:09.935 | ^ 00:39:09.935 01:21:40 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:39:09.935 make[1]: Nothing to be done for 'clean'. 00:39:11.314 01:21:45 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:39:11.314 01:21:45 -- common/autotest_common.sh@728 -- $ xtrace_disable 00:39:11.314 01:21:45 -- common/autotest_common.sh@10 -- $ set +x 00:39:11.314 01:21:45 -- spdk/autopackage.sh@48 -- $ timing_finish 00:39:11.314 01:21:45 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:11.314 01:21:45 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:39:11.314 01:21:45 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:39:11.573 + [[ -n 2805 ]] 00:39:11.573 + sudo kill 2805 00:39:11.582 [Pipeline] } 00:39:11.598 [Pipeline] // timeout 00:39:11.604 [Pipeline] } 00:39:11.619 [Pipeline] // stage 00:39:11.625 [Pipeline] } 00:39:11.639 [Pipeline] // catchError 00:39:11.649 [Pipeline] stage 00:39:11.652 [Pipeline] { (Stop VM) 00:39:11.664 [Pipeline] sh 00:39:11.948 + vagrant halt 00:39:15.235 ==> default: Halting domain... 00:39:25.229 [Pipeline] sh 00:39:25.568 + vagrant destroy -f 00:39:28.139 ==> default: Removing domain... 00:39:29.091 [Pipeline] sh 00:39:29.373 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output 00:39:29.383 [Pipeline] } 00:39:29.401 [Pipeline] // stage 00:39:29.406 [Pipeline] } 00:39:29.423 [Pipeline] // dir 00:39:29.428 [Pipeline] } 00:39:29.445 [Pipeline] // wrap 00:39:29.453 [Pipeline] } 00:39:29.467 [Pipeline] // catchError 00:39:29.477 [Pipeline] stage 00:39:29.480 [Pipeline] { (Epilogue) 00:39:29.494 [Pipeline] sh 00:39:29.778 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:44.677 [Pipeline] catchError 00:39:44.680 [Pipeline] { 00:39:44.694 [Pipeline] sh 00:39:44.982 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:45.241 Artifacts sizes are good 00:39:45.250 [Pipeline] } 00:39:45.267 [Pipeline] // catchError 00:39:45.278 [Pipeline] archiveArtifacts 00:39:45.285 Archiving artifacts 00:39:45.591 [Pipeline] cleanWs 00:39:45.604 [WS-CLEANUP] Deleting project workspace... 00:39:45.604 [WS-CLEANUP] Deferred wipeout is used... 
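[Editor's note] Near the end of the run, autotest_common.sh@737 feeds the collected timing.txt to FlameGraph's flamegraph.pl to render the "Build Timing" graph archived with the other artifacts. flamegraph.pl consumes "folded" input, one line per stack with a trailing count; the sketch below is a hedged illustration with made-up step names and durations, since the real timing.txt contents never appear in this log. Only the flamegraph.pl flags are taken verbatim from the command above:

# Made-up folded input; flags copied from the invocation logged above.
cat > /tmp/timing.txt <<'EOF'
autobuild;configure 30
autobuild;make 1640
autopackage;build_release 466
EOF
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
    --countname seconds /tmp/timing.txt > /tmp/timing.svg

The resulting SVG is what ends up under the output directory that the pipeline moves and archives in the epilogue steps below.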
00:39:45.610 [WS-CLEANUP] done 00:39:45.612 [Pipeline] } 00:39:45.624 [Pipeline] // stage 00:39:45.630 [Pipeline] } 00:39:45.643 [Pipeline] // node 00:39:45.648 [Pipeline] End of Pipeline 00:39:45.676 Finished: SUCCESS